Test Report: Docker_Linux_crio_arm64 22122

022dd2780ab8206ac68153a1ee37fdbcc6da7ccd:2025-12-13:42761

Failed tests (44/316)

Order  Failed test  Duration (s)
38 TestAddons/serial/Volcano 0.51
44 TestAddons/parallel/Registry 16.42
45 TestAddons/parallel/RegistryCreds 0.55
46 TestAddons/parallel/Ingress 143.34
47 TestAddons/parallel/InspektorGadget 6.29
48 TestAddons/parallel/MetricsServer 6.36
50 TestAddons/parallel/CSI 41.54
51 TestAddons/parallel/Headlamp 3.41
52 TestAddons/parallel/CloudSpanner 5.3
53 TestAddons/parallel/LocalPath 8.38
54 TestAddons/parallel/NvidiaDevicePlugin 5.47
55 TestAddons/parallel/Yakd 6.39
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 502.16
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 368.51
175 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 2.42
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 2.47
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 2.4
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 735.25
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 2.16
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 0.06
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 1.78
197 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 2.3
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 2.37
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 241.66
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 3.02
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.6
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup 0.09
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 109.76
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 0.05
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 0.31
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 0.27
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.25
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.28
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.26
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 2.74
279 TestMultiControlPlane/serial/RestartCluster 478.17
280 TestMultiControlPlane/serial/DegradedAfterClusterRestart 5.16
281 TestMultiControlPlane/serial/AddSecondaryNode 91.3
282 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 5.94
293 TestJSONOutput/pause/Command 1.76
299 TestJSONOutput/unpause/Command 1.71
358 TestKubernetesUpgrade 796.37
384 TestPause/serial/Pause 6.35
449 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 7200.072
TestAddons/serial/Volcano (0.51s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:852: skipping: crio not supported
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-377325 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-377325 addons disable volcano --alsologtostderr -v=1: exit status 11 (504.887682ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 18:19:43.930710   11490 out.go:360] Setting OutFile to fd 1 ...
	I1213 18:19:43.931533   11490 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:19:43.931581   11490 out.go:374] Setting ErrFile to fd 2...
	I1213 18:19:43.931603   11490 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:19:43.931944   11490 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-2686/.minikube/bin
	I1213 18:19:43.932296   11490 mustload.go:66] Loading cluster: addons-377325
	I1213 18:19:43.932770   11490 config.go:182] Loaded profile config "addons-377325": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 18:19:43.932810   11490 addons.go:622] checking whether the cluster is paused
	I1213 18:19:43.932955   11490 config.go:182] Loaded profile config "addons-377325": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 18:19:43.932986   11490 host.go:66] Checking if "addons-377325" exists ...
	I1213 18:19:43.934289   11490 cli_runner.go:164] Run: docker container inspect addons-377325 --format={{.State.Status}}
	I1213 18:19:43.970519   11490 ssh_runner.go:195] Run: systemctl --version
	I1213 18:19:43.970575   11490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377325
	I1213 18:19:43.991976   11490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/addons-377325/id_rsa Username:docker}
	I1213 18:19:44.103808   11490 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 18:19:44.103970   11490 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 18:19:44.145071   11490 cri.go:89] found id: "42d706a88ed1b79de9cbc8220725f23931d77e619f962b73e511fcb0df095dcf"
	I1213 18:19:44.145092   11490 cri.go:89] found id: "1c4f8a1dece343dfd524ce5e6db2a545f5bbcabf4319371df21b295d9978f460"
	I1213 18:19:44.145098   11490 cri.go:89] found id: "f361bc25cf32b543c565a18b16afb390523428d84bc14ad86dbacef94cd618f2"
	I1213 18:19:44.145106   11490 cri.go:89] found id: "352bbc3896f303b3b4b4edcffdd2af5759da504004de35069dcbf6701b7ff404"
	I1213 18:19:44.145110   11490 cri.go:89] found id: "2e7fb6d0ca7acd5082666d2f5b93e6106772a93783323b9d70c8dc01cc803b6b"
	I1213 18:19:44.145113   11490 cri.go:89] found id: "cd33fc9243f510b27e6ee856df4a733493114c65ecedbe49e4d2e4db5c3f1a92"
	I1213 18:19:44.145116   11490 cri.go:89] found id: "3946c9e84e3da8e144dd011e9aad2d763f490b97fc556c1432831aec7351dd15"
	I1213 18:19:44.145119   11490 cri.go:89] found id: "054c83d5a1f87b8b0447a3c96743b01e535aa374946a4476cf156bdf43c4634b"
	I1213 18:19:44.145122   11490 cri.go:89] found id: "87610c2eb50cf16ef807cbc696e6152bee0cc4d51e77b5fea346b538dc7ca77a"
	I1213 18:19:44.145129   11490 cri.go:89] found id: "52764c4f81789f7ac0788d22170eef03d2d3c697ff94cd73d0a431f152db2e0d"
	I1213 18:19:44.145132   11490 cri.go:89] found id: "0a800ad4dd0e939ce2cf0fb3f8e2ebd3fe5f4fe340c694377880af81c0b56b82"
	I1213 18:19:44.145135   11490 cri.go:89] found id: "7dddc3bceec5a40164bf2128e718b8dad6c5c34fd5b6a656b28d732b6f85e291"
	I1213 18:19:44.145138   11490 cri.go:89] found id: "599b8ce504818d0e1d93166a52551dc93f2ae22e19769a32db7f1806184b2db0"
	I1213 18:19:44.145141   11490 cri.go:89] found id: "228e6f9a0fdeda7bb28f407279f8c6549c2abaacc0fe0d2fa8dda1eadc802e23"
	I1213 18:19:44.145144   11490 cri.go:89] found id: "0d77a566cb2c6b0cbe174ab2f0537c30a6a6ba2b40472501b4d0cac4192769a2"
	I1213 18:19:44.145148   11490 cri.go:89] found id: "dae0269172396ca9383a18ef3e4f9883c0bb9bf733a41e2b5d7701c47abcbf45"
	I1213 18:19:44.145151   11490 cri.go:89] found id: "c37b9bf999a3f7ee5efa91a30230aedd4764b122566edbc45a747e71e6f77aee"
	I1213 18:19:44.145155   11490 cri.go:89] found id: "57a4c5bd3b052a576bdbd867d075032671fea264b0d670cfb2500f3f7c53a338"
	I1213 18:19:44.145157   11490 cri.go:89] found id: "05178b358a31f960ebd0c746e41e311b3501e13c8dc83cd6e55fdc24cb53d30a"
	I1213 18:19:44.145161   11490 cri.go:89] found id: "4c0b427c73b3bae515b7e2c83cf5f4d2deb0cb58b62c0b619e81dcf9540e3892"
	I1213 18:19:44.145165   11490 cri.go:89] found id: "003f9ee38f6b439a2728ba924bc15a17baba7b021d1b5c661c1157951ed9412c"
	I1213 18:19:44.145168   11490 cri.go:89] found id: "9f44e406e70a42ff3053d90866118a64ff6559f7d4c5878e24daa08620477af0"
	I1213 18:19:44.145171   11490 cri.go:89] found id: "3edde11a7e9037281d89cc0f87b82f0eea20cb96289b644d6152987f1b65be33"
	I1213 18:19:44.145173   11490 cri.go:89] found id: ""
	I1213 18:19:44.145227   11490 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 18:19:44.170376   11490 out.go:203] 
	W1213 18:19:44.185923   11490 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T18:19:44Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T18:19:44Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 18:19:44.186017   11490 out.go:285] * 
	* 
	W1213 18:19:44.295656   11490 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 18:19:44.316406   11490 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-377325 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.51s)
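Note: this failure and the other addons disable failures below exit with the same MK_ADDON_DISABLE_PAUSED error: minikube's paused-state check lists kube-system containers with crictl and then calls runc, and the runc call fails because /run/runc does not exist on this crio node. A minimal sketch of re-running just that check by hand; the inner commands are copied verbatim from the stderr above, while wrapping them in `minikube ssh` to execute them on the node (and the profile name addons-377325, specific to this run) is an assumption of the sketch:

	# list kube-system containers the same way the check does
	out/minikube-linux-arm64 -p addons-377325 ssh -- sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	# the step that fails in this run: runc's state dir /run/runc is missing
	out/minikube-linux-arm64 -p addons-377325 ssh -- sudo runc list -f json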

                                                
                                    
TestAddons/parallel/Registry (16.42s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 10.331501ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-b6lxz" [e23f899f-6b28-4f63-adbd-2adb36c8f008] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003672135s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-zxcm2" [e19a41e7-ad9e-4d36-8a5b-cc0fea51183a] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.003673149s
addons_test.go:394: (dbg) Run:  kubectl --context addons-377325 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-377325 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-377325 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.864365079s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-arm64 -p addons-377325 ip
2025/12/13 18:20:10 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-377325 addons disable registry --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-377325 addons disable registry --alsologtostderr -v=1: exit status 11 (291.26503ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 18:20:10.928794   12449 out.go:360] Setting OutFile to fd 1 ...
	I1213 18:20:10.929042   12449 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:20:10.929054   12449 out.go:374] Setting ErrFile to fd 2...
	I1213 18:20:10.929060   12449 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:20:10.929461   12449 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-2686/.minikube/bin
	I1213 18:20:10.929841   12449 mustload.go:66] Loading cluster: addons-377325
	I1213 18:20:10.930532   12449 config.go:182] Loaded profile config "addons-377325": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 18:20:10.930551   12449 addons.go:622] checking whether the cluster is paused
	I1213 18:20:10.930733   12449 config.go:182] Loaded profile config "addons-377325": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 18:20:10.930755   12449 host.go:66] Checking if "addons-377325" exists ...
	I1213 18:20:10.931901   12449 cli_runner.go:164] Run: docker container inspect addons-377325 --format={{.State.Status}}
	I1213 18:20:10.949403   12449 ssh_runner.go:195] Run: systemctl --version
	I1213 18:20:10.949467   12449 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377325
	I1213 18:20:10.972573   12449 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/addons-377325/id_rsa Username:docker}
	I1213 18:20:11.081168   12449 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 18:20:11.081264   12449 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 18:20:11.125925   12449 cri.go:89] found id: "42d706a88ed1b79de9cbc8220725f23931d77e619f962b73e511fcb0df095dcf"
	I1213 18:20:11.125952   12449 cri.go:89] found id: "1c4f8a1dece343dfd524ce5e6db2a545f5bbcabf4319371df21b295d9978f460"
	I1213 18:20:11.125958   12449 cri.go:89] found id: "f361bc25cf32b543c565a18b16afb390523428d84bc14ad86dbacef94cd618f2"
	I1213 18:20:11.125962   12449 cri.go:89] found id: "352bbc3896f303b3b4b4edcffdd2af5759da504004de35069dcbf6701b7ff404"
	I1213 18:20:11.125966   12449 cri.go:89] found id: "2e7fb6d0ca7acd5082666d2f5b93e6106772a93783323b9d70c8dc01cc803b6b"
	I1213 18:20:11.125970   12449 cri.go:89] found id: "cd33fc9243f510b27e6ee856df4a733493114c65ecedbe49e4d2e4db5c3f1a92"
	I1213 18:20:11.125973   12449 cri.go:89] found id: "3946c9e84e3da8e144dd011e9aad2d763f490b97fc556c1432831aec7351dd15"
	I1213 18:20:11.125977   12449 cri.go:89] found id: "054c83d5a1f87b8b0447a3c96743b01e535aa374946a4476cf156bdf43c4634b"
	I1213 18:20:11.125980   12449 cri.go:89] found id: "87610c2eb50cf16ef807cbc696e6152bee0cc4d51e77b5fea346b538dc7ca77a"
	I1213 18:20:11.125987   12449 cri.go:89] found id: "52764c4f81789f7ac0788d22170eef03d2d3c697ff94cd73d0a431f152db2e0d"
	I1213 18:20:11.125991   12449 cri.go:89] found id: "0a800ad4dd0e939ce2cf0fb3f8e2ebd3fe5f4fe340c694377880af81c0b56b82"
	I1213 18:20:11.125994   12449 cri.go:89] found id: "7dddc3bceec5a40164bf2128e718b8dad6c5c34fd5b6a656b28d732b6f85e291"
	I1213 18:20:11.125998   12449 cri.go:89] found id: "599b8ce504818d0e1d93166a52551dc93f2ae22e19769a32db7f1806184b2db0"
	I1213 18:20:11.126002   12449 cri.go:89] found id: "228e6f9a0fdeda7bb28f407279f8c6549c2abaacc0fe0d2fa8dda1eadc802e23"
	I1213 18:20:11.126005   12449 cri.go:89] found id: "0d77a566cb2c6b0cbe174ab2f0537c30a6a6ba2b40472501b4d0cac4192769a2"
	I1213 18:20:11.126013   12449 cri.go:89] found id: "dae0269172396ca9383a18ef3e4f9883c0bb9bf733a41e2b5d7701c47abcbf45"
	I1213 18:20:11.126021   12449 cri.go:89] found id: "c37b9bf999a3f7ee5efa91a30230aedd4764b122566edbc45a747e71e6f77aee"
	I1213 18:20:11.126026   12449 cri.go:89] found id: "57a4c5bd3b052a576bdbd867d075032671fea264b0d670cfb2500f3f7c53a338"
	I1213 18:20:11.126029   12449 cri.go:89] found id: "05178b358a31f960ebd0c746e41e311b3501e13c8dc83cd6e55fdc24cb53d30a"
	I1213 18:20:11.126032   12449 cri.go:89] found id: "4c0b427c73b3bae515b7e2c83cf5f4d2deb0cb58b62c0b619e81dcf9540e3892"
	I1213 18:20:11.126038   12449 cri.go:89] found id: "003f9ee38f6b439a2728ba924bc15a17baba7b021d1b5c661c1157951ed9412c"
	I1213 18:20:11.126042   12449 cri.go:89] found id: "9f44e406e70a42ff3053d90866118a64ff6559f7d4c5878e24daa08620477af0"
	I1213 18:20:11.126045   12449 cri.go:89] found id: "3edde11a7e9037281d89cc0f87b82f0eea20cb96289b644d6152987f1b65be33"
	I1213 18:20:11.126048   12449 cri.go:89] found id: ""
	I1213 18:20:11.126110   12449 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 18:20:11.152071   12449 out.go:203] 
	W1213 18:20:11.155027   12449 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T18:20:11Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T18:20:11Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 18:20:11.155061   12449 out.go:285] * 
	* 
	W1213 18:20:11.158918   12449 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 18:20:11.161966   12449 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-377325 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (16.42s)

                                                
                                    
TestAddons/parallel/RegistryCreds (0.55s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 3.161262ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-377325
addons_test.go:334: (dbg) Run:  kubectl --context addons-377325 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-377325 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-377325 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (292.638637ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 18:20:40.039313   13438 out.go:360] Setting OutFile to fd 1 ...
	I1213 18:20:40.039630   13438 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:20:40.039658   13438 out.go:374] Setting ErrFile to fd 2...
	I1213 18:20:40.039679   13438 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:20:40.040001   13438 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-2686/.minikube/bin
	I1213 18:20:40.040349   13438 mustload.go:66] Loading cluster: addons-377325
	I1213 18:20:40.040778   13438 config.go:182] Loaded profile config "addons-377325": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 18:20:40.040818   13438 addons.go:622] checking whether the cluster is paused
	I1213 18:20:40.040949   13438 config.go:182] Loaded profile config "addons-377325": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 18:20:40.040977   13438 host.go:66] Checking if "addons-377325" exists ...
	I1213 18:20:40.041614   13438 cli_runner.go:164] Run: docker container inspect addons-377325 --format={{.State.Status}}
	I1213 18:20:40.064662   13438 ssh_runner.go:195] Run: systemctl --version
	I1213 18:20:40.064721   13438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377325
	I1213 18:20:40.085628   13438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/addons-377325/id_rsa Username:docker}
	I1213 18:20:40.192579   13438 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 18:20:40.192722   13438 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 18:20:40.223793   13438 cri.go:89] found id: "42d706a88ed1b79de9cbc8220725f23931d77e619f962b73e511fcb0df095dcf"
	I1213 18:20:40.223843   13438 cri.go:89] found id: "1c4f8a1dece343dfd524ce5e6db2a545f5bbcabf4319371df21b295d9978f460"
	I1213 18:20:40.223848   13438 cri.go:89] found id: "f361bc25cf32b543c565a18b16afb390523428d84bc14ad86dbacef94cd618f2"
	I1213 18:20:40.223853   13438 cri.go:89] found id: "352bbc3896f303b3b4b4edcffdd2af5759da504004de35069dcbf6701b7ff404"
	I1213 18:20:40.223857   13438 cri.go:89] found id: "2e7fb6d0ca7acd5082666d2f5b93e6106772a93783323b9d70c8dc01cc803b6b"
	I1213 18:20:40.223861   13438 cri.go:89] found id: "cd33fc9243f510b27e6ee856df4a733493114c65ecedbe49e4d2e4db5c3f1a92"
	I1213 18:20:40.223864   13438 cri.go:89] found id: "3946c9e84e3da8e144dd011e9aad2d763f490b97fc556c1432831aec7351dd15"
	I1213 18:20:40.223867   13438 cri.go:89] found id: "054c83d5a1f87b8b0447a3c96743b01e535aa374946a4476cf156bdf43c4634b"
	I1213 18:20:40.223871   13438 cri.go:89] found id: "87610c2eb50cf16ef807cbc696e6152bee0cc4d51e77b5fea346b538dc7ca77a"
	I1213 18:20:40.223877   13438 cri.go:89] found id: "52764c4f81789f7ac0788d22170eef03d2d3c697ff94cd73d0a431f152db2e0d"
	I1213 18:20:40.223880   13438 cri.go:89] found id: "0a800ad4dd0e939ce2cf0fb3f8e2ebd3fe5f4fe340c694377880af81c0b56b82"
	I1213 18:20:40.223884   13438 cri.go:89] found id: "7dddc3bceec5a40164bf2128e718b8dad6c5c34fd5b6a656b28d732b6f85e291"
	I1213 18:20:40.223888   13438 cri.go:89] found id: "599b8ce504818d0e1d93166a52551dc93f2ae22e19769a32db7f1806184b2db0"
	I1213 18:20:40.223891   13438 cri.go:89] found id: "228e6f9a0fdeda7bb28f407279f8c6549c2abaacc0fe0d2fa8dda1eadc802e23"
	I1213 18:20:40.223894   13438 cri.go:89] found id: "0d77a566cb2c6b0cbe174ab2f0537c30a6a6ba2b40472501b4d0cac4192769a2"
	I1213 18:20:40.223898   13438 cri.go:89] found id: "dae0269172396ca9383a18ef3e4f9883c0bb9bf733a41e2b5d7701c47abcbf45"
	I1213 18:20:40.223906   13438 cri.go:89] found id: "c37b9bf999a3f7ee5efa91a30230aedd4764b122566edbc45a747e71e6f77aee"
	I1213 18:20:40.223910   13438 cri.go:89] found id: "57a4c5bd3b052a576bdbd867d075032671fea264b0d670cfb2500f3f7c53a338"
	I1213 18:20:40.223913   13438 cri.go:89] found id: "05178b358a31f960ebd0c746e41e311b3501e13c8dc83cd6e55fdc24cb53d30a"
	I1213 18:20:40.223916   13438 cri.go:89] found id: "4c0b427c73b3bae515b7e2c83cf5f4d2deb0cb58b62c0b619e81dcf9540e3892"
	I1213 18:20:40.223930   13438 cri.go:89] found id: "003f9ee38f6b439a2728ba924bc15a17baba7b021d1b5c661c1157951ed9412c"
	I1213 18:20:40.223939   13438 cri.go:89] found id: "9f44e406e70a42ff3053d90866118a64ff6559f7d4c5878e24daa08620477af0"
	I1213 18:20:40.223942   13438 cri.go:89] found id: "3edde11a7e9037281d89cc0f87b82f0eea20cb96289b644d6152987f1b65be33"
	I1213 18:20:40.223945   13438 cri.go:89] found id: ""
	I1213 18:20:40.224000   13438 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 18:20:40.240658   13438 out.go:203] 
	W1213 18:20:40.243515   13438 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T18:20:40Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T18:20:40Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 18:20:40.243542   13438 out.go:285] * 
	* 
	W1213 18:20:40.247361   13438 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 18:20:40.250360   13438 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-377325 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.55s)

                                                
                                    
TestAddons/parallel/Ingress (143.34s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-377325 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-377325 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-377325 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [9cc03354-40fe-44af-be7a-154d70ea5d8b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [9cc03354-40fe-44af-be7a-154d70ea5d8b] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.003386799s
I1213 18:20:32.492601    4637 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-arm64 -p addons-377325 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:266: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-377325 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.349466727s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:282: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:290: (dbg) Run:  kubectl --context addons-377325 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-arm64 -p addons-377325 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-377325
helpers_test.go:244: (dbg) docker inspect addons-377325:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d1b08c8b0cba43afd7eb70b58179d249064cf2c7007d64232063258d4d30138e",
	        "Created": "2025-12-13T18:17:30.991623713Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 6053,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T18:17:31.075997651Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/d1b08c8b0cba43afd7eb70b58179d249064cf2c7007d64232063258d4d30138e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d1b08c8b0cba43afd7eb70b58179d249064cf2c7007d64232063258d4d30138e/hostname",
	        "HostsPath": "/var/lib/docker/containers/d1b08c8b0cba43afd7eb70b58179d249064cf2c7007d64232063258d4d30138e/hosts",
	        "LogPath": "/var/lib/docker/containers/d1b08c8b0cba43afd7eb70b58179d249064cf2c7007d64232063258d4d30138e/d1b08c8b0cba43afd7eb70b58179d249064cf2c7007d64232063258d4d30138e-json.log",
	        "Name": "/addons-377325",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-377325:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-377325",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d1b08c8b0cba43afd7eb70b58179d249064cf2c7007d64232063258d4d30138e",
	                "LowerDir": "/var/lib/docker/overlay2/99be71c0b30ed4d376bc0a5a25800fc91dd30b6dda394c858acec718b94b33e5-init/diff:/var/lib/docker/overlay2/4cda671c3c20fb572bbb254b6cb2d66de67b46788c2aa883ec19024f1ff16f23/diff",
	                "MergedDir": "/var/lib/docker/overlay2/99be71c0b30ed4d376bc0a5a25800fc91dd30b6dda394c858acec718b94b33e5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/99be71c0b30ed4d376bc0a5a25800fc91dd30b6dda394c858acec718b94b33e5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/99be71c0b30ed4d376bc0a5a25800fc91dd30b6dda394c858acec718b94b33e5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-377325",
	                "Source": "/var/lib/docker/volumes/addons-377325/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-377325",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-377325",
	                "name.minikube.sigs.k8s.io": "addons-377325",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fed9ad655d5468e6cb16857658e8407795d260aa3c682c4e53643b51f1120c2b",
	            "SandboxKey": "/var/run/docker/netns/fed9ad655d54",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-377325": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fe:88:6b:c7:a6:e8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1e114794b49d1e802f2fed399dde1a4b5db42d2b08d3c3681323b57e7b03fa8f",
	                    "EndpointID": "9f63b24e7e36ce34efb182fd09615d82ae13ed6d59dc4d906dbab7ad4a878e9c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-377325",
	                        "d1b08c8b0cba"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-377325 -n addons-377325
helpers_test.go:253: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p addons-377325 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p addons-377325 logs -n 25: (1.600134162s)
helpers_test.go:261: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-351651                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-351651 │ jenkins │ v1.37.0 │ 13 Dec 25 18:17 UTC │ 13 Dec 25 18:17 UTC │
	│ start   │ --download-only -p binary-mirror-542781 --alsologtostderr --binary-mirror http://127.0.0.1:45875 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-542781   │ jenkins │ v1.37.0 │ 13 Dec 25 18:17 UTC │                     │
	│ delete  │ -p binary-mirror-542781                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-542781   │ jenkins │ v1.37.0 │ 13 Dec 25 18:17 UTC │ 13 Dec 25 18:17 UTC │
	│ addons  │ enable dashboard -p addons-377325                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-377325          │ jenkins │ v1.37.0 │ 13 Dec 25 18:17 UTC │                     │
	│ addons  │ disable dashboard -p addons-377325                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-377325          │ jenkins │ v1.37.0 │ 13 Dec 25 18:17 UTC │                     │
	│ start   │ -p addons-377325 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-377325          │ jenkins │ v1.37.0 │ 13 Dec 25 18:17 UTC │ 13 Dec 25 18:19 UTC │
	│ addons  │ addons-377325 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-377325          │ jenkins │ v1.37.0 │ 13 Dec 25 18:19 UTC │                     │
	│ addons  │ addons-377325 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-377325          │ jenkins │ v1.37.0 │ 13 Dec 25 18:19 UTC │                     │
	│ addons  │ enable headlamp -p addons-377325 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-377325          │ jenkins │ v1.37.0 │ 13 Dec 25 18:19 UTC │                     │
	│ addons  │ addons-377325 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-377325          │ jenkins │ v1.37.0 │ 13 Dec 25 18:19 UTC │                     │
	│ ip      │ addons-377325 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-377325          │ jenkins │ v1.37.0 │ 13 Dec 25 18:20 UTC │ 13 Dec 25 18:20 UTC │
	│ addons  │ addons-377325 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-377325          │ jenkins │ v1.37.0 │ 13 Dec 25 18:20 UTC │                     │
	│ addons  │ addons-377325 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-377325          │ jenkins │ v1.37.0 │ 13 Dec 25 18:20 UTC │                     │
	│ addons  │ addons-377325 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-377325          │ jenkins │ v1.37.0 │ 13 Dec 25 18:20 UTC │                     │
	│ ssh     │ addons-377325 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-377325          │ jenkins │ v1.37.0 │ 13 Dec 25 18:20 UTC │                     │
	│ addons  │ addons-377325 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-377325          │ jenkins │ v1.37.0 │ 13 Dec 25 18:20 UTC │                     │
	│ addons  │ addons-377325 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-377325          │ jenkins │ v1.37.0 │ 13 Dec 25 18:20 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-377325                                                                                                                                                                                                                                                                                                                                                                                           │ addons-377325          │ jenkins │ v1.37.0 │ 13 Dec 25 18:20 UTC │ 13 Dec 25 18:20 UTC │
	│ addons  │ addons-377325 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-377325          │ jenkins │ v1.37.0 │ 13 Dec 25 18:20 UTC │                     │
	│ ssh     │ addons-377325 ssh cat /opt/local-path-provisioner/pvc-0724d684-911a-4545-b553-e71f3e94668e_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-377325          │ jenkins │ v1.37.0 │ 13 Dec 25 18:20 UTC │ 13 Dec 25 18:20 UTC │
	│ addons  │ addons-377325 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-377325          │ jenkins │ v1.37.0 │ 13 Dec 25 18:20 UTC │                     │
	│ addons  │ addons-377325 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-377325          │ jenkins │ v1.37.0 │ 13 Dec 25 18:20 UTC │                     │
	│ addons  │ addons-377325 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-377325          │ jenkins │ v1.37.0 │ 13 Dec 25 18:21 UTC │                     │
	│ addons  │ addons-377325 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-377325          │ jenkins │ v1.37.0 │ 13 Dec 25 18:21 UTC │                     │
	│ ip      │ addons-377325 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-377325          │ jenkins │ v1.37.0 │ 13 Dec 25 18:22 UTC │ 13 Dec 25 18:22 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 18:17:06
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 18:17:06.165344    5650 out.go:360] Setting OutFile to fd 1 ...
	I1213 18:17:06.165576    5650 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:17:06.165606    5650 out.go:374] Setting ErrFile to fd 2...
	I1213 18:17:06.165624    5650 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:17:06.165925    5650 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-2686/.minikube/bin
	I1213 18:17:06.166479    5650 out.go:368] Setting JSON to false
	I1213 18:17:06.167502    5650 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":3579,"bootTime":1765646248,"procs":149,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 18:17:06.167622    5650 start.go:143] virtualization:  
	I1213 18:17:06.171327    5650 out.go:179] * [addons-377325] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 18:17:06.174362    5650 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 18:17:06.174458    5650 notify.go:221] Checking for updates...
	I1213 18:17:06.180279    5650 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 18:17:06.183361    5650 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-2686/kubeconfig
	I1213 18:17:06.186360    5650 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-2686/.minikube
	I1213 18:17:06.189445    5650 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 18:17:06.192548    5650 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 18:17:06.195684    5650 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 18:17:06.230813    5650 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 18:17:06.230954    5650 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 18:17:06.294902    5650 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-13 18:17:06.285111034 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 18:17:06.295020    5650 docker.go:319] overlay module found
	I1213 18:17:06.298258    5650 out.go:179] * Using the docker driver based on user configuration
	I1213 18:17:06.301246    5650 start.go:309] selected driver: docker
	I1213 18:17:06.301270    5650 start.go:927] validating driver "docker" against <nil>
	I1213 18:17:06.301283    5650 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 18:17:06.302071    5650 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 18:17:06.363083    5650 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-13 18:17:06.353760297 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 18:17:06.363248    5650 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 18:17:06.363493    5650 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 18:17:06.366551    5650 out.go:179] * Using Docker driver with root privileges
	I1213 18:17:06.369396    5650 cni.go:84] Creating CNI manager for ""
	I1213 18:17:06.369477    5650 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 18:17:06.369496    5650 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 18:17:06.369588    5650 start.go:353] cluster config:
	{Name:addons-377325 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-377325 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1213 18:17:06.372904    5650 out.go:179] * Starting "addons-377325" primary control-plane node in "addons-377325" cluster
	I1213 18:17:06.376053    5650 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 18:17:06.379211    5650 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 18:17:06.382247    5650 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 18:17:06.382318    5650 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-2686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1213 18:17:06.382334    5650 cache.go:65] Caching tarball of preloaded images
	I1213 18:17:06.382351    5650 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 18:17:06.382453    5650 preload.go:238] Found /home/jenkins/minikube-integration/22122-2686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 18:17:06.382465    5650 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1213 18:17:06.382869    5650 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/config.json ...
	I1213 18:17:06.382942    5650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/config.json: {Name:mkaaf44029fbe14b9df08ab6a9609ef9606bb7fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 18:17:06.400760    5650 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f to local cache
	I1213 18:17:06.400945    5650 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory
	I1213 18:17:06.400984    5650 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory, skipping pull
	I1213 18:17:06.400993    5650 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in cache, skipping pull
	I1213 18:17:06.401028    5650 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f as a tarball
	I1213 18:17:06.401035    5650 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f from local cache
	I1213 18:17:24.282113    5650 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f from cached tarball
	I1213 18:17:24.282154    5650 cache.go:243] Successfully downloaded all kic artifacts
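Whether the kicbase image really landed on the host after the cached-tarball load can be double-checked with a plain Docker listing; the repository name comes from the log lines above, the flags are generic:

	docker images --digests gcr.io/k8s-minikube/kicbase-builds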
	I1213 18:17:24.282192    5650 start.go:360] acquireMachinesLock for addons-377325: {Name:mkf44ed8b66583f628999561be83d83d1e36fea0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 18:17:24.282307    5650 start.go:364] duration metric: took 91.612µs to acquireMachinesLock for "addons-377325"
	I1213 18:17:24.282337    5650 start.go:93] Provisioning new machine with config: &{Name:addons-377325 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-377325 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 18:17:24.282404    5650 start.go:125] createHost starting for "" (driver="docker")
	I1213 18:17:24.285997    5650 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1213 18:17:24.286246    5650 start.go:159] libmachine.API.Create for "addons-377325" (driver="docker")
	I1213 18:17:24.286288    5650 client.go:173] LocalClient.Create starting
	I1213 18:17:24.286403    5650 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem
	I1213 18:17:24.882640    5650 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem
	I1213 18:17:25.108341    5650 cli_runner.go:164] Run: docker network inspect addons-377325 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 18:17:25.124309    5650 cli_runner.go:211] docker network inspect addons-377325 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 18:17:25.124408    5650 network_create.go:284] running [docker network inspect addons-377325] to gather additional debugging logs...
	I1213 18:17:25.124431    5650 cli_runner.go:164] Run: docker network inspect addons-377325
	W1213 18:17:25.142275    5650 cli_runner.go:211] docker network inspect addons-377325 returned with exit code 1
	I1213 18:17:25.142306    5650 network_create.go:287] error running [docker network inspect addons-377325]: docker network inspect addons-377325: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-377325 not found
	I1213 18:17:25.142321    5650 network_create.go:289] output of [docker network inspect addons-377325]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-377325 not found
	
	** /stderr **
	I1213 18:17:25.142429    5650 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 18:17:25.159472    5650 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001bbb020}
	I1213 18:17:25.159519    5650 network_create.go:124] attempt to create docker network addons-377325 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1213 18:17:25.159574    5650 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-377325 addons-377325
	I1213 18:17:25.217912    5650 network_create.go:108] docker network addons-377325 192.168.49.0/24 created
	I1213 18:17:25.217944    5650 kic.go:121] calculated static IP "192.168.49.2" for the "addons-377325" container
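The freshly created network can be spot-checked from the host with a plain Docker query; the profile name and the expected subnet/gateway come from the log lines above, while the format string is just a generic Go template:

	docker network inspect addons-377325 \
	  --format '{{(index .IPAM.Config 0).Subnet}} gw {{(index .IPAM.Config 0).Gateway}}'
	# expected for this run: 192.168.49.0/24 gw 192.168.49.1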
	I1213 18:17:25.218017    5650 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 18:17:25.234398    5650 cli_runner.go:164] Run: docker volume create addons-377325 --label name.minikube.sigs.k8s.io=addons-377325 --label created_by.minikube.sigs.k8s.io=true
	I1213 18:17:25.252755    5650 oci.go:103] Successfully created a docker volume addons-377325
	I1213 18:17:25.252858    5650 cli_runner.go:164] Run: docker run --rm --name addons-377325-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-377325 --entrypoint /usr/bin/test -v addons-377325:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 18:17:26.903998    5650 cli_runner.go:217] Completed: docker run --rm --name addons-377325-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-377325 --entrypoint /usr/bin/test -v addons-377325:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib: (1.651094506s)
	I1213 18:17:26.904029    5650 oci.go:107] Successfully prepared a docker volume addons-377325
	I1213 18:17:26.904076    5650 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 18:17:26.904095    5650 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 18:17:26.904165    5650 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22122-2686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-377325:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 18:17:30.919873    5650 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22122-2686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-377325:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (4.015627968s)
	I1213 18:17:30.919903    5650 kic.go:203] duration metric: took 4.015804963s to extract preloaded images to volume ...
	W1213 18:17:30.920042    5650 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1213 18:17:30.920158    5650 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 18:17:30.976087    5650 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-377325 --name addons-377325 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-377325 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-377325 --network addons-377325 --ip 192.168.49.2 --volume addons-377325:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 18:17:31.336454    5650 cli_runner.go:164] Run: docker container inspect addons-377325 --format={{.State.Running}}
	I1213 18:17:31.355757    5650 cli_runner.go:164] Run: docker container inspect addons-377325 --format={{.State.Status}}
	I1213 18:17:31.379043    5650 cli_runner.go:164] Run: docker exec addons-377325 stat /var/lib/dpkg/alternatives/iptables
	I1213 18:17:31.433234    5650 oci.go:144] the created container "addons-377325" has a running status.
	I1213 18:17:31.433271    5650 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22122-2686/.minikube/machines/addons-377325/id_rsa...
	I1213 18:17:31.576804    5650 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22122-2686/.minikube/machines/addons-377325/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 18:17:31.599774    5650 cli_runner.go:164] Run: docker container inspect addons-377325 --format={{.State.Status}}
	I1213 18:17:31.626330    5650 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 18:17:31.626353    5650 kic_runner.go:114] Args: [docker exec --privileged addons-377325 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 18:17:31.685880    5650 cli_runner.go:164] Run: docker container inspect addons-377325 --format={{.State.Status}}
	I1213 18:17:31.709248    5650 machine.go:94] provisionDockerMachine start ...
	I1213 18:17:31.709346    5650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377325
	I1213 18:17:31.727791    5650 main.go:143] libmachine: Using SSH client type: native
	I1213 18:17:31.728174    5650 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1213 18:17:31.728193    5650 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 18:17:31.728897    5650 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1213 18:17:34.880391    5650 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-377325
	
	I1213 18:17:34.880460    5650 ubuntu.go:182] provisioning hostname "addons-377325"
	I1213 18:17:34.880536    5650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377325
	I1213 18:17:34.897253    5650 main.go:143] libmachine: Using SSH client type: native
	I1213 18:17:34.897570    5650 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1213 18:17:34.897587    5650 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-377325 && echo "addons-377325" | sudo tee /etc/hostname
	I1213 18:17:35.055092    5650 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-377325
	
	I1213 18:17:35.055178    5650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377325
	I1213 18:17:35.072733    5650 main.go:143] libmachine: Using SSH client type: native
	I1213 18:17:35.073074    5650 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1213 18:17:35.073091    5650 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-377325' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-377325/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-377325' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 18:17:35.225254    5650 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 18:17:35.225282    5650 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-2686/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-2686/.minikube}
	I1213 18:17:35.225302    5650 ubuntu.go:190] setting up certificates
	I1213 18:17:35.225319    5650 provision.go:84] configureAuth start
	I1213 18:17:35.225382    5650 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-377325
	I1213 18:17:35.242802    5650 provision.go:143] copyHostCerts
	I1213 18:17:35.242908    5650 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem (1082 bytes)
	I1213 18:17:35.243040    5650 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem (1123 bytes)
	I1213 18:17:35.243103    5650 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem (1675 bytes)
	I1213 18:17:35.243154    5650 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem org=jenkins.addons-377325 san=[127.0.0.1 192.168.49.2 addons-377325 localhost minikube]
	I1213 18:17:35.636228    5650 provision.go:177] copyRemoteCerts
	I1213 18:17:35.636295    5650 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 18:17:35.636370    5650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377325
	I1213 18:17:35.654416    5650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/addons-377325/id_rsa Username:docker}
	I1213 18:17:35.756547    5650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 18:17:35.773517    5650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1213 18:17:35.790326    5650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 18:17:35.807497    5650 provision.go:87] duration metric: took 582.150808ms to configureAuth
	I1213 18:17:35.807523    5650 ubuntu.go:206] setting minikube options for container-runtime
	I1213 18:17:35.807742    5650 config.go:182] Loaded profile config "addons-377325": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 18:17:35.807845    5650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377325
	I1213 18:17:35.824759    5650 main.go:143] libmachine: Using SSH client type: native
	I1213 18:17:35.825097    5650 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1213 18:17:35.825119    5650 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 18:17:36.132625    5650 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 18:17:36.132647    5650 machine.go:97] duration metric: took 4.423379072s to provisionDockerMachine
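If the CRI-O registry option needs to be verified after the fact, the sysconfig file written above can be read back through minikube's ssh helper; the profile name and path come from the log, and the invocation itself is just standard minikube usage rather than something this run executes:

	minikube -p addons-377325 ssh -- cat /etc/sysconfig/crio.minikube
	# expected: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '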
	I1213 18:17:36.132657    5650 client.go:176] duration metric: took 11.846357088s to LocalClient.Create
	I1213 18:17:36.132677    5650 start.go:167] duration metric: took 11.846433118s to libmachine.API.Create "addons-377325"
	I1213 18:17:36.132684    5650 start.go:293] postStartSetup for "addons-377325" (driver="docker")
	I1213 18:17:36.132694    5650 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 18:17:36.132756    5650 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 18:17:36.132803    5650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377325
	I1213 18:17:36.150592    5650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/addons-377325/id_rsa Username:docker}
	I1213 18:17:36.258125    5650 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 18:17:36.261755    5650 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 18:17:36.261781    5650 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 18:17:36.261793    5650 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-2686/.minikube/addons for local assets ...
	I1213 18:17:36.261870    5650 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-2686/.minikube/files for local assets ...
	I1213 18:17:36.261896    5650 start.go:296] duration metric: took 129.20579ms for postStartSetup
	I1213 18:17:36.262232    5650 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-377325
	I1213 18:17:36.279585    5650 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/config.json ...
	I1213 18:17:36.279873    5650 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 18:17:36.279915    5650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377325
	I1213 18:17:36.299384    5650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/addons-377325/id_rsa Username:docker}
	I1213 18:17:36.402546    5650 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 18:17:36.407480    5650 start.go:128] duration metric: took 12.125061927s to createHost
	I1213 18:17:36.407505    5650 start.go:83] releasing machines lock for "addons-377325", held for 12.125183544s
	I1213 18:17:36.407576    5650 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-377325
	I1213 18:17:36.424845    5650 ssh_runner.go:195] Run: cat /version.json
	I1213 18:17:36.424903    5650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377325
	I1213 18:17:36.425187    5650 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 18:17:36.425248    5650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377325
	I1213 18:17:36.446723    5650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/addons-377325/id_rsa Username:docker}
	I1213 18:17:36.450815    5650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/addons-377325/id_rsa Username:docker}
	I1213 18:17:36.549817    5650 ssh_runner.go:195] Run: systemctl --version
	I1213 18:17:36.639265    5650 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 18:17:36.673379    5650 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 18:17:36.677272    5650 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 18:17:36.677337    5650 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 18:17:36.704311    5650 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1213 18:17:36.704330    5650 start.go:496] detecting cgroup driver to use...
	I1213 18:17:36.704360    5650 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 18:17:36.704408    5650 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 18:17:36.721092    5650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 18:17:36.733313    5650 docker.go:218] disabling cri-docker service (if available) ...
	I1213 18:17:36.733377    5650 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 18:17:36.750310    5650 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 18:17:36.768682    5650 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 18:17:36.885544    5650 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 18:17:37.021437    5650 docker.go:234] disabling docker service ...
	I1213 18:17:37.021571    5650 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 18:17:37.045903    5650 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 18:17:37.059388    5650 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 18:17:37.182843    5650 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 18:17:37.303695    5650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 18:17:37.316414    5650 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 18:17:37.331451    5650 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 18:17:37.331540    5650 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:17:37.340307    5650 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 18:17:37.340410    5650 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:17:37.349593    5650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:17:37.358626    5650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:17:37.367423    5650 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 18:17:37.375605    5650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:17:37.384088    5650 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:17:37.397213    5650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:17:37.405902    5650 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 18:17:37.413059    5650 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1213 18:17:37.413146    5650 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1213 18:17:37.427194    5650 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 18:17:37.435304    5650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 18:17:37.554388    5650 ssh_runner.go:195] Run: sudo systemctl restart crio
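Read together, the sed edits above imply that the drop-in /etc/crio/crio.conf.d/02-crio.conf ends up with roughly the following settings once crio is restarted; this is a reconstruction from the logged commands, not a capture of the actual file:

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]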
	I1213 18:17:37.732225    5650 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 18:17:37.732321    5650 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 18:17:37.736232    5650 start.go:564] Will wait 60s for crictl version
	I1213 18:17:37.736296    5650 ssh_runner.go:195] Run: which crictl
	I1213 18:17:37.739814    5650 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 18:17:37.765302    5650 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 18:17:37.765441    5650 ssh_runner.go:195] Run: crio --version
	I1213 18:17:37.796482    5650 ssh_runner.go:195] Run: crio --version
	I1213 18:17:37.828076    5650 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1213 18:17:37.831016    5650 cli_runner.go:164] Run: docker network inspect addons-377325 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 18:17:37.847344    5650 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 18:17:37.851072    5650 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 18:17:37.861103    5650 kubeadm.go:884] updating cluster {Name:addons-377325 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-377325 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 18:17:37.861236    5650 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 18:17:37.861294    5650 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 18:17:37.893842    5650 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 18:17:37.893869    5650 crio.go:433] Images already preloaded, skipping extraction
	I1213 18:17:37.893925    5650 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 18:17:37.922239    5650 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 18:17:37.922264    5650 cache_images.go:86] Images are preloaded, skipping loading
	I1213 18:17:37.922272    5650 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1213 18:17:37.922363    5650 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-377325 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-377325 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
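The kubelet unit override shown above can be read back from inside the node once it is installed; systemctl cat prints the base unit together with the 10-kubeadm.conf drop-in that minikube copies over later in this log. The command is generic and assumes the usual drop-in location:

	minikube -p addons-377325 ssh -- sudo systemctl cat kubelet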
	I1213 18:17:37.922457    5650 ssh_runner.go:195] Run: crio config
	I1213 18:17:37.992739    5650 cni.go:84] Creating CNI manager for ""
	I1213 18:17:37.992761    5650 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 18:17:37.992782    5650 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 18:17:37.992806    5650 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-377325 NodeName:addons-377325 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 18:17:37.992933    5650 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-377325"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 18:17:37.993033    5650 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 18:17:38.000948    5650 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 18:17:38.001141    5650 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 18:17:38.020789    5650 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1213 18:17:38.035845    5650 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 18:17:38.050671    5650 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
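The rendered kubeadm config is written to /var/tmp/minikube/kubeadm.yaml.new (2210 bytes, per the line above). The actual kubeadm invocation is not part of this excerpt, but a config in this shape is normally consumed through kubeadm's --config flag; an illustrative call, using the binary path from the log and assuming the .new file is what ultimately gets used, would be:

	sudo /var/lib/minikube/binaries/v1.34.2/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new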
	I1213 18:17:38.066196    5650 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 18:17:38.070391    5650 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 18:17:38.081392    5650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 18:17:38.196924    5650 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 18:17:38.212420    5650 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325 for IP: 192.168.49.2
	I1213 18:17:38.212441    5650 certs.go:195] generating shared ca certs ...
	I1213 18:17:38.212456    5650 certs.go:227] acquiring lock for ca certs: {Name:mkf9b87b9a1a82bdfcae8a54365751154f6d858f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 18:17:38.212608    5650 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key
	I1213 18:17:38.579950    5650 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt ...
	I1213 18:17:38.579985    5650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt: {Name:mk2f407ae7978a5cf334863b6824308cf93b4a09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 18:17:38.580177    5650 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key ...
	I1213 18:17:38.580190    5650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key: {Name:mkeb7ff4c2cb1968fb6d9a7cd6276eef31fcc6eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 18:17:38.580278    5650 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key
	I1213 18:17:38.862741    5650 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.crt ...
	I1213 18:17:38.862775    5650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.crt: {Name:mkfa7f7e3f20875cf22ab2ed8c3cfc16a80ee9ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 18:17:38.862957    5650 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key ...
	I1213 18:17:38.862970    5650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key: {Name:mk1446a47835ef28b2059aa5658af7fa98c57ad9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 18:17:38.863060    5650 certs.go:257] generating profile certs ...
	I1213 18:17:38.863118    5650 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/client.key
	I1213 18:17:38.863130    5650 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/client.crt with IP's: []
	I1213 18:17:39.270584    5650 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/client.crt ...
	I1213 18:17:39.270633    5650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/client.crt: {Name:mk1ec8956285149ceef36aacbe439c65f6350ff4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 18:17:39.270825    5650 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/client.key ...
	I1213 18:17:39.270838    5650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/client.key: {Name:mk72f4efafb2648fe93915168325e2b985ffd41c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 18:17:39.270925    5650 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/apiserver.key.00813dc7
	I1213 18:17:39.270944    5650 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/apiserver.crt.00813dc7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1213 18:17:39.567949    5650 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/apiserver.crt.00813dc7 ...
	I1213 18:17:39.567981    5650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/apiserver.crt.00813dc7: {Name:mk6b2dd4c39675e3aa614252695fb1cf173de2a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 18:17:39.568158    5650 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/apiserver.key.00813dc7 ...
	I1213 18:17:39.568176    5650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/apiserver.key.00813dc7: {Name:mkfa6b7b6d106e68e398aaea691221fae913d661 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 18:17:39.568257    5650 certs.go:382] copying /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/apiserver.crt.00813dc7 -> /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/apiserver.crt
	I1213 18:17:39.568335    5650 certs.go:386] copying /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/apiserver.key.00813dc7 -> /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/apiserver.key
	I1213 18:17:39.568388    5650 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/proxy-client.key
	I1213 18:17:39.568408    5650 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/proxy-client.crt with IP's: []
	I1213 18:17:39.901879    5650 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/proxy-client.crt ...
	I1213 18:17:39.901912    5650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/proxy-client.crt: {Name:mkc6c28a231ef85233e4ebd475bd379d65375db2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 18:17:39.902091    5650 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/proxy-client.key ...
	I1213 18:17:39.902105    5650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/proxy-client.key: {Name:mk39ac3efb272791f2fd5624547a5dddc0e5658b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 18:17:39.902291    5650 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 18:17:39.902335    5650 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem (1082 bytes)
	I1213 18:17:39.902366    5650 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem (1123 bytes)
	I1213 18:17:39.902395    5650 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem (1675 bytes)
	I1213 18:17:39.902962    5650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 18:17:39.921668    5650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 18:17:39.940625    5650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 18:17:39.959239    5650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 18:17:39.976975    5650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1213 18:17:39.994960    5650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 18:17:40.019627    5650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 18:17:40.047405    5650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 18:17:40.067854    5650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 18:17:40.090614    5650 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 18:17:40.104222    5650 ssh_runner.go:195] Run: openssl version
	I1213 18:17:40.111022    5650 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 18:17:40.119183    5650 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 18:17:40.127194    5650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 18:17:40.131239    5650 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I1213 18:17:40.131327    5650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 18:17:40.173161    5650 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 18:17:40.181078    5650 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
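	(The symlink name above is not arbitrary: OpenSSL looks up CAs in /etc/ssl/certs by the certificate's subject hash, which is exactly what the `openssl x509 -hash -noout` call computed (b5213941 for this CA). A minimal check that the two agree:
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	  ls -l /etc/ssl/certs/b5213941.0                                           # should resolve to minikubeCA.pem
	)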
	I1213 18:17:40.188767    5650 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 18:17:40.192469    5650 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 18:17:40.192561    5650 kubeadm.go:401] StartCluster: {Name:addons-377325 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-377325 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 18:17:40.192655    5650 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 18:17:40.192726    5650 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 18:17:40.219561    5650 cri.go:89] found id: ""
	I1213 18:17:40.219629    5650 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 18:17:40.227572    5650 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 18:17:40.235335    5650 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 18:17:40.235442    5650 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 18:17:40.243166    5650 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 18:17:40.243191    5650 kubeadm.go:158] found existing configuration files:
	
	I1213 18:17:40.243242    5650 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 18:17:40.250903    5650 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 18:17:40.250966    5650 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 18:17:40.258265    5650 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 18:17:40.265924    5650 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 18:17:40.265996    5650 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 18:17:40.273410    5650 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 18:17:40.280825    5650 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 18:17:40.280891    5650 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 18:17:40.288100    5650 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 18:17:40.295828    5650 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 18:17:40.295901    5650 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 18:17:40.303305    5650 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 18:17:40.367937    5650 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1213 18:17:40.368314    5650 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 18:17:40.436356    5650 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 18:17:55.359161    5650 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1213 18:17:55.359218    5650 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 18:17:55.359323    5650 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 18:17:55.359384    5650 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 18:17:55.359422    5650 kubeadm.go:319] OS: Linux
	I1213 18:17:55.359474    5650 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 18:17:55.359526    5650 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 18:17:55.359577    5650 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 18:17:55.359627    5650 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 18:17:55.359679    5650 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 18:17:55.359739    5650 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 18:17:55.359788    5650 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 18:17:55.359840    5650 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 18:17:55.359891    5650 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 18:17:55.359967    5650 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 18:17:55.360065    5650 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 18:17:55.360159    5650 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 18:17:55.360225    5650 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 18:17:55.365071    5650 out.go:252]   - Generating certificates and keys ...
	I1213 18:17:55.365198    5650 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 18:17:55.365264    5650 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 18:17:55.365343    5650 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 18:17:55.365401    5650 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 18:17:55.365463    5650 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 18:17:55.365522    5650 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 18:17:55.365595    5650 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 18:17:55.365727    5650 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-377325 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1213 18:17:55.365784    5650 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 18:17:55.365913    5650 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-377325 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1213 18:17:55.366031    5650 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 18:17:55.366101    5650 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 18:17:55.366163    5650 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 18:17:55.366230    5650 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 18:17:55.366284    5650 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 18:17:55.366340    5650 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 18:17:55.366431    5650 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 18:17:55.366517    5650 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 18:17:55.366599    5650 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 18:17:55.366707    5650 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 18:17:55.366822    5650 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 18:17:55.369620    5650 out.go:252]   - Booting up control plane ...
	I1213 18:17:55.369769    5650 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 18:17:55.369885    5650 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 18:17:55.369986    5650 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 18:17:55.370094    5650 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 18:17:55.370193    5650 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 18:17:55.370300    5650 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 18:17:55.370388    5650 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 18:17:55.370431    5650 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 18:17:55.370564    5650 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 18:17:55.370672    5650 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 18:17:55.370741    5650 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.006829363s
	I1213 18:17:55.370837    5650 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1213 18:17:55.370923    5650 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1213 18:17:55.371016    5650 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1213 18:17:55.371098    5650 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1213 18:17:55.371177    5650 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.491560762s
	I1213 18:17:55.371247    5650 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.84068988s
	I1213 18:17:55.371329    5650 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.502190953s
	I1213 18:17:55.371438    5650 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 18:17:55.371566    5650 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 18:17:55.371628    5650 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 18:17:55.371903    5650 kubeadm.go:319] [mark-control-plane] Marking the node addons-377325 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1213 18:17:55.371980    5650 kubeadm.go:319] [bootstrap-token] Using token: k0r6nn.wrth4ud4rzw0uc9v
	I1213 18:17:55.376846    5650 out.go:252]   - Configuring RBAC rules ...
	I1213 18:17:55.376991    5650 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 18:17:55.377214    5650 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1213 18:17:55.377379    5650 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 18:17:55.377528    5650 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 18:17:55.377654    5650 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 18:17:55.377757    5650 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 18:17:55.377900    5650 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1213 18:17:55.377962    5650 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1213 18:17:55.378028    5650 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1213 18:17:55.378042    5650 kubeadm.go:319] 
	I1213 18:17:55.378104    5650 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1213 18:17:55.378116    5650 kubeadm.go:319] 
	I1213 18:17:55.378200    5650 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1213 18:17:55.378207    5650 kubeadm.go:319] 
	I1213 18:17:55.378237    5650 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1213 18:17:55.378311    5650 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 18:17:55.378378    5650 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 18:17:55.378385    5650 kubeadm.go:319] 
	I1213 18:17:55.378449    5650 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1213 18:17:55.378461    5650 kubeadm.go:319] 
	I1213 18:17:55.378526    5650 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1213 18:17:55.378541    5650 kubeadm.go:319] 
	I1213 18:17:55.378608    5650 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1213 18:17:55.378699    5650 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 18:17:55.378792    5650 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 18:17:55.378804    5650 kubeadm.go:319] 
	I1213 18:17:55.378928    5650 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1213 18:17:55.379030    5650 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1213 18:17:55.379038    5650 kubeadm.go:319] 
	I1213 18:17:55.379136    5650 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token k0r6nn.wrth4ud4rzw0uc9v \
	I1213 18:17:55.379270    5650 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:5c855727c547190fbfc8dabe20c5acea2e54aecf6fee3a83d21da995a7e3060d \
	I1213 18:17:55.379296    5650 kubeadm.go:319] 	--control-plane 
	I1213 18:17:55.379338    5650 kubeadm.go:319] 
	I1213 18:17:55.379470    5650 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1213 18:17:55.379494    5650 kubeadm.go:319] 
	I1213 18:17:55.379592    5650 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token k0r6nn.wrth4ud4rzw0uc9v \
	I1213 18:17:55.379741    5650 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:5c855727c547190fbfc8dabe20c5acea2e54aecf6fee3a83d21da995a7e3060d 
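	(Once `kubeadm init` reports success as above, the new control plane can be inspected directly on the node with the admin kubeconfig it wrote; a minimal sketch using the bundled kubectl, with paths taken from this log but not verified here:
	  sudo env KUBECONFIG=/etc/kubernetes/admin.conf /var/lib/minikube/binaries/v1.34.2/kubectl get nodes
	  sudo env KUBECONFIG=/etc/kubernetes/admin.conf /var/lib/minikube/binaries/v1.34.2/kubectl -n kube-system get pods
	)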
	I1213 18:17:55.379776    5650 cni.go:84] Creating CNI manager for ""
	I1213 18:17:55.379800    5650 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 18:17:55.384743    5650 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1213 18:17:55.387752    5650 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1213 18:17:55.391775    5650 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1213 18:17:55.391793    5650 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1213 18:17:55.406052    5650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
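	(The earlier stat on /opt/cni/bin/portmap only checks that the standard CNI plugin binaries are present; after the kindnet manifest above is applied, the node-side pieces can be listed as a quick sanity check. Directory names are the CNI defaults, assumed rather than taken from this log:
	  ls /opt/cni/bin/       # bridge, portmap, loopback, ...
	  ls /etc/cni/net.d/     # network config written once kindnet starts
	)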
	I1213 18:17:55.707532    5650 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 18:17:55.707717    5650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 18:17:55.707822    5650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-377325 minikube.k8s.io/updated_at=2025_12_13T18_17_55_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=142a8bd7cb3f031b5f72a3965bb211dc77d9e1a7 minikube.k8s.io/name=addons-377325 minikube.k8s.io/primary=true
	I1213 18:17:55.721314    5650 ops.go:34] apiserver oom_adj: -16
	I1213 18:17:55.835054    5650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 18:17:56.335833    5650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 18:17:56.835140    5650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 18:17:57.336140    5650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 18:17:57.836106    5650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 18:17:58.335200    5650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 18:17:58.835128    5650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 18:17:59.335914    5650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 18:17:59.835140    5650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 18:17:59.985689    5650 kubeadm.go:1114] duration metric: took 4.278030452s to wait for elevateKubeSystemPrivileges
	I1213 18:17:59.985719    5650 kubeadm.go:403] duration metric: took 19.793161926s to StartCluster
	I1213 18:17:59.985737    5650 settings.go:142] acquiring lock: {Name:mkabef07beee93a0619ef6b8f854900ab9ed0899 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 18:17:59.985847    5650 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22122-2686/kubeconfig
	I1213 18:17:59.986233    5650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/kubeconfig: {Name:mkd364151ba0e08b56cf2ae826abbcc274faaabf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 18:17:59.986409    5650 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 18:17:59.986583    5650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 18:17:59.986836    5650 config.go:182] Loaded profile config "addons-377325": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 18:17:59.986873    5650 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1213 18:17:59.986945    5650 addons.go:70] Setting yakd=true in profile "addons-377325"
	I1213 18:17:59.986960    5650 addons.go:239] Setting addon yakd=true in "addons-377325"
	I1213 18:17:59.986985    5650 host.go:66] Checking if "addons-377325" exists ...
	I1213 18:17:59.987474    5650 cli_runner.go:164] Run: docker container inspect addons-377325 --format={{.State.Status}}
	I1213 18:17:59.987851    5650 addons.go:70] Setting inspektor-gadget=true in profile "addons-377325"
	I1213 18:17:59.987869    5650 addons.go:239] Setting addon inspektor-gadget=true in "addons-377325"
	I1213 18:17:59.987891    5650 host.go:66] Checking if "addons-377325" exists ...
	I1213 18:17:59.988309    5650 cli_runner.go:164] Run: docker container inspect addons-377325 --format={{.State.Status}}
	I1213 18:17:59.990692    5650 addons.go:70] Setting metrics-server=true in profile "addons-377325"
	I1213 18:17:59.990725    5650 addons.go:239] Setting addon metrics-server=true in "addons-377325"
	I1213 18:17:59.990856    5650 host.go:66] Checking if "addons-377325" exists ...
	I1213 18:17:59.991492    5650 out.go:179] * Verifying Kubernetes components...
	I1213 18:17:59.992374    5650 cli_runner.go:164] Run: docker container inspect addons-377325 --format={{.State.Status}}
	I1213 18:17:59.991649    5650 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-377325"
	I1213 18:17:59.993921    5650 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-377325"
	I1213 18:17:59.993956    5650 host.go:66] Checking if "addons-377325" exists ...
	I1213 18:17:59.994423    5650 cli_runner.go:164] Run: docker container inspect addons-377325 --format={{.State.Status}}
	I1213 18:17:59.997152    5650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 18:17:59.991657    5650 addons.go:70] Setting cloud-spanner=true in profile "addons-377325"
	I1213 18:17:59.997273    5650 addons.go:239] Setting addon cloud-spanner=true in "addons-377325"
	I1213 18:17:59.997332    5650 host.go:66] Checking if "addons-377325" exists ...
	I1213 18:17:59.997778    5650 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-377325"
	I1213 18:17:59.997793    5650 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-377325"
	I1213 18:17:59.997813    5650 host.go:66] Checking if "addons-377325" exists ...
	I1213 18:17:59.998209    5650 cli_runner.go:164] Run: docker container inspect addons-377325 --format={{.State.Status}}
	I1213 18:17:59.998558    5650 cli_runner.go:164] Run: docker container inspect addons-377325 --format={{.State.Status}}
	I1213 18:17:59.991662    5650 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-377325"
	I1213 18:18:00.005116    5650 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-377325"
	I1213 18:18:00.005158    5650 host.go:66] Checking if "addons-377325" exists ...
	I1213 18:18:00.005624    5650 cli_runner.go:164] Run: docker container inspect addons-377325 --format={{.State.Status}}
	I1213 18:18:00.018135    5650 addons.go:70] Setting registry=true in profile "addons-377325"
	I1213 18:18:00.019507    5650 addons.go:239] Setting addon registry=true in "addons-377325"
	I1213 18:17:59.991665    5650 addons.go:70] Setting default-storageclass=true in profile "addons-377325"
	I1213 18:17:59.991668    5650 addons.go:70] Setting gcp-auth=true in profile "addons-377325"
	I1213 18:17:59.991671    5650 addons.go:70] Setting ingress=true in profile "addons-377325"
	I1213 18:17:59.991674    5650 addons.go:70] Setting ingress-dns=true in profile "addons-377325"
	I1213 18:18:00.020054    5650 addons.go:239] Setting addon ingress-dns=true in "addons-377325"
	I1213 18:18:00.020905    5650 host.go:66] Checking if "addons-377325" exists ...
	I1213 18:18:00.021667    5650 cli_runner.go:164] Run: docker container inspect addons-377325 --format={{.State.Status}}
	I1213 18:18:00.051891    5650 addons.go:70] Setting registry-creds=true in profile "addons-377325"
	I1213 18:18:00.072540    5650 addons.go:239] Setting addon registry-creds=true in "addons-377325"
	I1213 18:18:00.072609    5650 host.go:66] Checking if "addons-377325" exists ...
	I1213 18:18:00.073226    5650 cli_runner.go:164] Run: docker container inspect addons-377325 --format={{.State.Status}}
	I1213 18:18:00.051907    5650 addons.go:70] Setting storage-provisioner=true in profile "addons-377325"
	I1213 18:18:00.075743    5650 addons.go:239] Setting addon storage-provisioner=true in "addons-377325"
	I1213 18:18:00.075818    5650 host.go:66] Checking if "addons-377325" exists ...
	I1213 18:18:00.051911    5650 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-377325"
	I1213 18:18:00.051915    5650 addons.go:70] Setting volcano=true in profile "addons-377325"
	I1213 18:18:00.051918    5650 addons.go:70] Setting volumesnapshots=true in profile "addons-377325"
	I1213 18:18:00.052008    5650 host.go:66] Checking if "addons-377325" exists ...
	I1213 18:18:00.052025    5650 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-377325"
	I1213 18:18:00.052040    5650 mustload.go:66] Loading cluster: addons-377325
	I1213 18:18:00.052060    5650 addons.go:239] Setting addon ingress=true in "addons-377325"
	I1213 18:18:00.076233    5650 host.go:66] Checking if "addons-377325" exists ...
	I1213 18:18:00.077744    5650 cli_runner.go:164] Run: docker container inspect addons-377325 --format={{.State.Status}}
	I1213 18:18:00.082404    5650 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-377325"
	I1213 18:18:00.082922    5650 cli_runner.go:164] Run: docker container inspect addons-377325 --format={{.State.Status}}
	I1213 18:18:00.091581    5650 addons.go:239] Setting addon volcano=true in "addons-377325"
	I1213 18:18:00.091718    5650 host.go:66] Checking if "addons-377325" exists ...
	I1213 18:18:00.092336    5650 cli_runner.go:164] Run: docker container inspect addons-377325 --format={{.State.Status}}
	I1213 18:18:00.106934    5650 addons.go:239] Setting addon volumesnapshots=true in "addons-377325"
	I1213 18:18:00.107062    5650 host.go:66] Checking if "addons-377325" exists ...
	I1213 18:18:00.107748    5650 cli_runner.go:164] Run: docker container inspect addons-377325 --format={{.State.Status}}
	I1213 18:18:00.112545    5650 cli_runner.go:164] Run: docker container inspect addons-377325 --format={{.State.Status}}
	I1213 18:18:00.135574    5650 cli_runner.go:164] Run: docker container inspect addons-377325 --format={{.State.Status}}
	I1213 18:18:00.162434    5650 cli_runner.go:164] Run: docker container inspect addons-377325 --format={{.State.Status}}
	I1213 18:18:00.186737    5650 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1213 18:18:00.187371    5650 config.go:182] Loaded profile config "addons-377325": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 18:18:00.187663    5650 cli_runner.go:164] Run: docker container inspect addons-377325 --format={{.State.Status}}
	I1213 18:18:00.261312    5650 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1213 18:18:00.269368    5650 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1213 18:18:00.269454    5650 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1213 18:18:00.269569    5650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377325
	I1213 18:18:00.277175    5650 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1213 18:18:00.277200    5650 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1213 18:18:00.277276    5650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377325
	I1213 18:18:00.296171    5650 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1213 18:18:00.305688    5650 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1213 18:18:00.308649    5650 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1213 18:18:00.308681    5650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1213 18:18:00.308784    5650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377325
	I1213 18:18:00.309149    5650 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1213 18:18:00.309674    5650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1213 18:18:00.309776    5650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377325
	I1213 18:18:00.339801    5650 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1213 18:18:00.344904    5650 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1213 18:18:00.344994    5650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1213 18:18:00.345155    5650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377325
	I1213 18:18:00.379731    5650 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1213 18:18:00.405990    5650 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1213 18:18:00.409624    5650 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1213 18:18:00.409650    5650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1213 18:18:00.409722    5650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377325
	I1213 18:18:00.459757    5650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1213 18:18:00.485622    5650 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1213 18:18:00.485949    5650 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1213 18:18:00.490682    5650 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1213 18:18:00.493590    5650 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1213 18:18:00.493779    5650 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1213 18:18:00.493825    5650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1213 18:18:00.493955    5650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377325
	I1213 18:18:00.501374    5650 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 18:18:00.501648    5650 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1213 18:18:00.501677    5650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1213 18:18:00.501802    5650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377325
	I1213 18:18:00.512390    5650 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:18:00.512430    5650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 18:18:00.512496    5650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377325
	I1213 18:18:00.518948    5650 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1213 18:18:00.521188    5650 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-377325"
	I1213 18:18:00.525215    5650 host.go:66] Checking if "addons-377325" exists ...
	I1213 18:18:00.525760    5650 cli_runner.go:164] Run: docker container inspect addons-377325 --format={{.State.Status}}
	I1213 18:18:00.546828    5650 addons.go:239] Setting addon default-storageclass=true in "addons-377325"
	I1213 18:18:00.546867    5650 host.go:66] Checking if "addons-377325" exists ...
	I1213 18:18:00.547294    5650 cli_runner.go:164] Run: docker container inspect addons-377325 --format={{.State.Status}}
	I1213 18:18:00.561848    5650 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1213 18:18:00.569598    5650 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1213 18:18:00.569628    5650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1213 18:18:00.569702    5650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377325
	W1213 18:18:00.578275    5650 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1213 18:18:00.578786    5650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/addons-377325/id_rsa Username:docker}
	I1213 18:18:00.580456    5650 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1213 18:18:00.580579    5650 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1213 18:18:00.595572    5650 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1213 18:18:00.601258    5650 out.go:179]   - Using image docker.io/registry:3.0.0
	I1213 18:18:00.601469    5650 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1213 18:18:00.601483    5650 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1213 18:18:00.601557    5650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377325
	I1213 18:18:00.604558    5650 host.go:66] Checking if "addons-377325" exists ...
	I1213 18:18:00.607282    5650 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1213 18:18:00.607377    5650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1213 18:18:00.607470    5650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377325
	I1213 18:18:00.629341    5650 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1213 18:18:00.633144    5650 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1213 18:18:00.635764    5650 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1213 18:18:00.640405    5650 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 18:18:00.643680    5650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/addons-377325/id_rsa Username:docker}
	I1213 18:18:00.644657    5650 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1213 18:18:00.651220    5650 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1213 18:18:00.654219    5650 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1213 18:18:00.654284    5650 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1213 18:18:00.654387    5650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377325
	I1213 18:18:00.676747    5650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/addons-377325/id_rsa Username:docker}
	I1213 18:18:00.684638    5650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/addons-377325/id_rsa Username:docker}
	I1213 18:18:00.705200    5650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/addons-377325/id_rsa Username:docker}
	I1213 18:18:00.710806    5650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/addons-377325/id_rsa Username:docker}
	I1213 18:18:00.742374    5650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/addons-377325/id_rsa Username:docker}
	I1213 18:18:00.760177    5650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/addons-377325/id_rsa Username:docker}
	I1213 18:18:00.762398    5650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/addons-377325/id_rsa Username:docker}
	I1213 18:18:00.772851    5650 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 18:18:00.772872    5650 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 18:18:00.772932    5650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377325
	I1213 18:18:00.816531    5650 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1213 18:18:00.820754    5650 out.go:179]   - Using image docker.io/busybox:stable
	I1213 18:18:00.821106    5650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/addons-377325/id_rsa Username:docker}
	I1213 18:18:00.823426    5650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/addons-377325/id_rsa Username:docker}
	I1213 18:18:00.831204    5650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/addons-377325/id_rsa Username:docker}
	I1213 18:18:00.834842    5650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/addons-377325/id_rsa Username:docker}
	I1213 18:18:00.837323    5650 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1213 18:18:00.837342    5650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1213 18:18:00.837407    5650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377325
	W1213 18:18:00.865282    5650 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1213 18:18:00.865315    5650 retry.go:31] will retry after 236.13086ms: ssh: handshake failed: EOF
	I1213 18:18:00.867785    5650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/addons-377325/id_rsa Username:docker}
	I1213 18:18:00.886313    5650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/addons-377325/id_rsa Username:docker}
	I1213 18:18:01.394703    5650 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1213 18:18:01.394723    5650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1213 18:18:01.455726    5650 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1213 18:18:01.455746    5650 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1213 18:18:01.500240    5650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1213 18:18:01.505405    5650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1213 18:18:01.544018    5650 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1213 18:18:01.544122    5650 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1213 18:18:01.544600    5650 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1213 18:18:01.544646    5650 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1213 18:18:01.599293    5650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:18:01.644504    5650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1213 18:18:01.651493    5650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:18:01.653610    5650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1213 18:18:01.655581    5650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1213 18:18:01.657965    5650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1213 18:18:01.670574    5650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1213 18:18:01.674852    5650 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1213 18:18:01.674927    5650 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1213 18:18:01.694810    5650 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1213 18:18:01.694888    5650 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1213 18:18:01.724080    5650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1213 18:18:01.739626    5650 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 18:18:01.739696    5650 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1213 18:18:01.779412    5650 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1213 18:18:01.779482    5650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1213 18:18:01.790320    5650 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1213 18:18:01.790394    5650 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1213 18:18:01.944793    5650 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1213 18:18:01.944872    5650 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1213 18:18:01.982425    5650 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1213 18:18:01.982507    5650 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1213 18:18:01.986975    5650 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1213 18:18:01.987059    5650 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1213 18:18:02.023069    5650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 18:18:02.089439    5650 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1213 18:18:02.089517    5650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1213 18:18:02.094666    5650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1213 18:18:02.174983    5650 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1213 18:18:02.175059    5650 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1213 18:18:02.197072    5650 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1213 18:18:02.197151    5650 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1213 18:18:02.205214    5650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1213 18:18:02.361450    5650 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1213 18:18:02.361474    5650 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1213 18:18:02.405576    5650 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1213 18:18:02.405599    5650 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1213 18:18:02.437765    5650 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.977962945s)
	I1213 18:18:02.437795    5650 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1213 18:18:02.437867    5650 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.797441192s)
	I1213 18:18:02.438604    5650 node_ready.go:35] waiting up to 6m0s for node "addons-377325" to be "Ready" ...
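
(At this point `node_ready.go` begins polling, waiting up to 6m0s for node "addons-377325" to report Ready. The following is a minimal sketch of that kind of readiness check using client-go; the kubeconfig path, poll interval, and timeout here are assumptions for illustration, not values taken from the minikube source.)

```go
// node_ready_sketch.go - hypothetical sketch of polling a node for the Ready condition.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForNodeReady polls the node until its Ready condition is True or the timeout expires.
func waitForNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				// The node is usable once the Ready condition flips to True.
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second) // retry interval; assumed for illustration
	}
	return fmt.Errorf("node %q not Ready after %s", name, timeout)
}

func main() {
	// Assumed kubeconfig path; minikube keeps its own under the profile directory.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForNodeReady(context.Background(), cs, "addons-377325", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println(`node "addons-377325" is Ready`)
}
```
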
	I1213 18:18:02.710953    5650 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1213 18:18:02.711024    5650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1213 18:18:02.714190    5650 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1213 18:18:02.714268    5650 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1213 18:18:02.942976    5650 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-377325" context rescaled to 1 replicas
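
(The `kapi.go:214` line above rescales the `coredns` deployment in `kube-system` to one replica. A rough sketch of the same operation through the Deployment scale subresource with client-go follows; the namespace and deployment name come from the log line, while the helper itself is illustrative rather than minikube's actual code.)

```go
// rescale_sketch.go - hypothetical sketch of scaling kube-system/coredns to a given replica count.
package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// scaleCoreDNS sets the replica count on kube-system/coredns via the scale subresource.
func scaleCoreDNS(ctx context.Context, cs kubernetes.Interface, replicas int32) error {
	deployments := cs.AppsV1().Deployments("kube-system")
	scale, err := deployments.GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	scale.Spec.Replicas = replicas
	_, err = deployments.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
	return err
}
```
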
	I1213 18:18:02.944800    5650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1213 18:18:03.145551    5650 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1213 18:18:03.145622    5650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1213 18:18:03.374753    5650 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1213 18:18:03.374774    5650 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1213 18:18:03.560305    5650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.059983314s)
	I1213 18:18:03.560419    5650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (2.054952097s)
	I1213 18:18:03.560500    5650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.961128054s)
	I1213 18:18:03.560754    5650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.916178883s)
	I1213 18:18:03.573952    5650 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1213 18:18:03.574024    5650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1213 18:18:03.678806    5650 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1213 18:18:03.678831    5650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1213 18:18:03.756211    5650 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1213 18:18:03.756237    5650 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1213 18:18:03.830078    5650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1213 18:18:04.442752    5650 node_ready.go:57] node "addons-377325" has "Ready":"False" status (will retry)
	I1213 18:18:05.558374    5650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.906799654s)
	I1213 18:18:05.558481    5650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.904805315s)
	I1213 18:18:05.602501    5650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (3.946832161s)
	W1213 18:18:06.455063    5650 node_ready.go:57] node "addons-377325" has "Ready":"False" status (will retry)
	I1213 18:18:06.470501    5650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.812448043s)
	I1213 18:18:06.470535    5650 addons.go:495] Verifying addon ingress=true in "addons-377325"
	I1213 18:18:06.470690    5650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.80004404s)
	I1213 18:18:06.470868    5650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.746707737s)
	I1213 18:18:06.470949    5650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.447804056s)
	I1213 18:18:06.470962    5650 addons.go:495] Verifying addon metrics-server=true in "addons-377325"
	I1213 18:18:06.470990    5650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.37625586s)
	I1213 18:18:06.471004    5650 addons.go:495] Verifying addon registry=true in "addons-377325"
	I1213 18:18:06.471438    5650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.266145411s)
	I1213 18:18:06.471718    5650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.526841788s)
	W1213 18:18:06.471747    5650 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1213 18:18:06.471763    5650 retry.go:31] will retry after 150.352069ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
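
(The failure above is the usual CRD-establishment race: the VolumeSnapshot CRDs and a VolumeSnapshotClass object are applied in the same batch, and the class cannot be mapped until the freshly created CRDs are established, hence "ensure CRDs are installed first". The log shows `retry.go` waiting ~150ms and, further down, re-running the apply with `--force`. Below is a hypothetical sketch of that retry pattern using `kubectl` via os/exec; the attempt count and backoff are assumptions and this is not minikube's retry implementation.)

```go
// retry_apply_sketch.go - hypothetical sketch of retrying `kubectl apply` when CRDs
// created in the same batch are not yet established ("ensure CRDs are installed first").
package sketch

import (
	"fmt"
	"os/exec"
	"time"
)

// applyWithRetry re-runs `kubectl apply --force -f <file>...` until it succeeds or attempts run out.
func applyWithRetry(files []string, attempts int, backoff time.Duration) error {
	args := []string{"apply", "--force"}
	for _, f := range files {
		args = append(args, "-f", f)
	}
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
		time.Sleep(backoff) // give the API server time to establish the new CRDs
	}
	return lastErr
}
```
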
	I1213 18:18:06.473701    5650 out.go:179] * Verifying ingress addon...
	I1213 18:18:06.475862    5650 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-377325 service yakd-dashboard -n yakd-dashboard
	
	I1213 18:18:06.475891    5650 out.go:179] * Verifying registry addon...
	I1213 18:18:06.478883    5650 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1213 18:18:06.480712    5650 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1213 18:18:06.489604    5650 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1213 18:18:06.489626    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:06.489973    5650 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1213 18:18:06.489986    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:06.622975    5650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1213 18:18:06.762390    5650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.932254781s)
	I1213 18:18:06.762425    5650 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-377325"
	I1213 18:18:06.765240    5650 out.go:179] * Verifying csi-hostpath-driver addon...
	I1213 18:18:06.769732    5650 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1213 18:18:06.776082    5650 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1213 18:18:06.776105    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
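
(The `kapi.go` lines above start label-selector waits: pods matching `app.kubernetes.io/name=ingress-nginx`, `kubernetes.io/minikube-addons=registry`, and `kubernetes.io/minikube-addons=csi-hostpath-driver` are polled until they leave Pending. A minimal client-go sketch of such a wait is below; the helper names, interval, and timeout are illustrative assumptions, not minikube's kapi implementation.)

```go
// pod_wait_sketch.go - hypothetical sketch of polling pods selected by label until Running.
package sketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// allPodsRunning reports whether every pod matching selector in ns is in phase Running.
func allPodsRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string) (bool, error) {
	pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return false, err
	}
	if len(pods.Items) == 0 {
		return false, nil // nothing scheduled yet
	}
	for _, p := range pods.Items {
		if p.Status.Phase != corev1.PodRunning {
			return false, nil
		}
	}
	return true, nil
}

// waitForPods polls allPodsRunning every interval until it succeeds or the timeout expires.
func waitForPods(ctx context.Context, cs kubernetes.Interface, ns, selector string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		ok, err := allPodsRunning(ctx, cs, ns, selector)
		if err == nil && ok {
			return nil
		}
		time.Sleep(interval)
	}
	return context.DeadlineExceeded
}
```

A call such as `waitForPods(ctx, cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx", 500*time.Millisecond, 6*time.Minute)` would mirror the ingress-nginx wait that the remaining log lines repeat until the pods are scheduled.
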
	I1213 18:18:06.982646    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:06.984917    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:07.273776    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:07.483271    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:07.483420    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:07.773093    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:07.982768    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:07.984579    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:08.235287    5650 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1213 18:18:08.235387    5650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377325
	I1213 18:18:08.254697    5650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/addons-377325/id_rsa Username:docker}
	I1213 18:18:08.273638    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:08.370130    5650 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1213 18:18:08.382559    5650 addons.go:239] Setting addon gcp-auth=true in "addons-377325"
	I1213 18:18:08.382647    5650 host.go:66] Checking if "addons-377325" exists ...
	I1213 18:18:08.383130    5650 cli_runner.go:164] Run: docker container inspect addons-377325 --format={{.State.Status}}
	I1213 18:18:08.400656    5650 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1213 18:18:08.400723    5650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377325
	I1213 18:18:08.417807    5650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/addons-377325/id_rsa Username:docker}
	I1213 18:18:08.482035    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:08.484113    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:08.772552    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 18:18:08.941667    5650 node_ready.go:57] node "addons-377325" has "Ready":"False" status (will retry)
	I1213 18:18:08.982630    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:08.983898    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:09.274388    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:09.437882    5650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.814860118s)
	I1213 18:18:09.437970    5650 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.037295976s)
	I1213 18:18:09.441454    5650 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1213 18:18:09.444456    5650 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1213 18:18:09.447348    5650 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1213 18:18:09.447387    5650 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1213 18:18:09.460125    5650 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1213 18:18:09.460148    5650 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1213 18:18:09.472713    5650 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1213 18:18:09.472735    5650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1213 18:18:09.484967    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:09.486068    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:09.488990    5650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1213 18:18:09.773739    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:09.996180    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:10.005191    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:10.022971    5650 addons.go:495] Verifying addon gcp-auth=true in "addons-377325"
	I1213 18:18:10.026492    5650 out.go:179] * Verifying gcp-auth addon...
	I1213 18:18:10.033796    5650 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1213 18:18:10.042784    5650 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1213 18:18:10.042813    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:10.273229    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:10.481846    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:10.483731    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:10.536605    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:10.773662    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:10.982581    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:10.983953    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:11.043732    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:11.273031    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 18:18:11.442028    5650 node_ready.go:57] node "addons-377325" has "Ready":"False" status (will retry)
	I1213 18:18:11.482236    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:11.484312    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:11.537120    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:11.773316    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:11.981969    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:11.984164    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:12.037002    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:12.273049    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:12.483390    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:12.483704    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:12.537246    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:12.773216    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:12.982494    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:12.984459    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:13.037283    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:13.273286    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 18:18:13.442155    5650 node_ready.go:57] node "addons-377325" has "Ready":"False" status (will retry)
	I1213 18:18:13.484173    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:13.484583    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:13.537047    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:13.774086    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:13.983657    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:13.983800    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:14.036594    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:14.273475    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:14.482065    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:14.484001    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:14.537100    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:14.772718    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:14.982536    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:14.983030    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:15.042511    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:15.272966    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:15.482513    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:15.484130    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:15.537076    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:15.773650    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 18:18:15.941182    5650 node_ready.go:57] node "addons-377325" has "Ready":"False" status (will retry)
	I1213 18:18:15.982370    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:15.984201    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:16.037357    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:16.272258    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:16.481967    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:16.484257    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:16.536968    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:16.772731    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:16.982435    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:16.983507    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:17.037250    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:17.273381    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:17.481824    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:17.483772    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:17.536576    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:17.773647    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 18:18:17.941266    5650 node_ready.go:57] node "addons-377325" has "Ready":"False" status (will retry)
	I1213 18:18:17.982067    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:17.983850    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:18.036623    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:18.272574    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:18.482399    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:18.483376    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:18.537052    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:18.773327    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:18.982586    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:18.983752    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:19.037254    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:19.273865    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:19.481793    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:19.483701    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:19.537377    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:19.774004    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 18:18:19.941687    5650 node_ready.go:57] node "addons-377325" has "Ready":"False" status (will retry)
	I1213 18:18:19.982601    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:19.983562    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:20.037981    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:20.273082    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:20.482135    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:20.484015    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:20.537206    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:20.773337    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:20.982448    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:20.983216    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:21.036930    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:21.273113    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:21.481887    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:21.484301    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:21.537304    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:21.773707    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:21.982591    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:21.983508    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:22.037608    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:22.272357    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 18:18:22.442237    5650 node_ready.go:57] node "addons-377325" has "Ready":"False" status (will retry)
	I1213 18:18:22.482488    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:22.484528    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:22.537279    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:22.772963    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:22.982028    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:22.984003    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:23.036805    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:23.272935    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:23.483332    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:23.484579    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:23.537665    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:23.774161    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:23.981811    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:23.983865    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:24.036560    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:24.273876    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:24.482213    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:24.483354    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:24.537615    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:24.772463    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 18:18:24.942037    5650 node_ready.go:57] node "addons-377325" has "Ready":"False" status (will retry)
	I1213 18:18:24.982161    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:24.984268    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:25.037070    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:25.273077    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:25.481947    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:25.484695    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:25.536589    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:25.772937    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:25.982092    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:25.983762    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:26.036491    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:26.272499    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:26.482002    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:26.483590    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:26.537403    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:26.773209    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 18:18:26.942117    5650 node_ready.go:57] node "addons-377325" has "Ready":"False" status (will retry)
	I1213 18:18:26.984498    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:26.988410    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:27.038499    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:27.272489    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:27.482051    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:27.483809    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:27.536513    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:27.772860    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:27.982739    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:27.983871    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:28.037056    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:28.273290    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:28.481800    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:28.484103    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:28.536823    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:28.772802    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:28.982379    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:28.983515    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:29.037193    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:29.273233    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 18:18:29.441771    5650 node_ready.go:57] node "addons-377325" has "Ready":"False" status (will retry)
	I1213 18:18:29.481699    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:29.483508    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:29.537316    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:29.773659    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:29.982815    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:29.983301    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:30.037753    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:30.273435    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:30.482131    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:30.484537    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:30.537419    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:30.773347    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:30.982272    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:30.984021    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:31.036887    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:31.272984    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 18:18:31.441828    5650 node_ready.go:57] node "addons-377325" has "Ready":"False" status (will retry)
	I1213 18:18:31.481645    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:31.483589    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:31.537634    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:31.772520    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:31.982545    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:31.991300    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:32.036596    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:32.272511    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:32.482027    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:32.484007    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:32.536725    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:32.772903    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:32.982978    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:32.983401    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:33.037460    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:33.273388    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 18:18:33.442425    5650 node_ready.go:57] node "addons-377325" has "Ready":"False" status (will retry)
	I1213 18:18:33.482868    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:33.484647    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:33.537574    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:33.772535    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:33.982974    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:33.984349    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:34.037594    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:34.272530    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:34.483902    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:34.484162    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:34.536749    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:34.772432    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:34.981799    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:34.983921    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:35.036591    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:35.273944    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:35.483304    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:35.483437    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:35.537063    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:35.772905    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 18:18:35.941910    5650 node_ready.go:57] node "addons-377325" has "Ready":"False" status (will retry)
	I1213 18:18:35.982415    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:35.984191    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:36.037126    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:36.272972    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:36.483301    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:36.483945    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:36.536736    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:36.772765    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:36.983922    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:36.985379    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:37.037614    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:37.272572    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:37.482720    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:37.484004    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:37.536544    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:37.773590    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:37.982101    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:37.985257    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:38.037364    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:38.273286    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 18:18:38.442335    5650 node_ready.go:57] node "addons-377325" has "Ready":"False" status (will retry)
	I1213 18:18:38.482244    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:38.484520    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:38.537240    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:38.773640    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:38.983550    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:38.986108    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:39.037153    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:39.273497    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:39.482395    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:39.484426    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:39.537334    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:39.773413    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:39.982485    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:39.984653    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:40.041846    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:40.273457    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:40.482492    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:40.483870    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:40.536712    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:40.773440    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 18:18:40.941880    5650 node_ready.go:57] node "addons-377325" has "Ready":"False" status (will retry)
	I1213 18:18:40.981992    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:40.984034    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:41.036596    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:41.272968    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:41.483300    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:41.483798    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:41.536904    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:41.779924    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:41.973434    5650 node_ready.go:49] node "addons-377325" is "Ready"
	I1213 18:18:41.973516    5650 node_ready.go:38] duration metric: took 39.534877573s for node "addons-377325" to be "Ready" ...
	I1213 18:18:41.973543    5650 api_server.go:52] waiting for apiserver process to appear ...
	I1213 18:18:41.973630    5650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:18:41.994324    5650 api_server.go:72] duration metric: took 42.007888474s to wait for apiserver process to appear ...
	I1213 18:18:41.994400    5650 api_server.go:88] waiting for apiserver healthz status ...
	I1213 18:18:41.994433    5650 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1213 18:18:42.007246    5650 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1213 18:18:42.010467    5650 api_server.go:141] control plane version: v1.34.2
	I1213 18:18:42.010501    5650 api_server.go:131] duration metric: took 16.08007ms to wait for apiserver health ...
	I1213 18:18:42.010512    5650 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 18:18:42.010916    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:42.017884    5650 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1213 18:18:42.017980    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:42.030928    5650 system_pods.go:59] 19 kube-system pods found
	I1213 18:18:42.031022    5650 system_pods.go:61] "coredns-66bc5c9577-6ct6w" [c6b2d853-3212-44d5-9a75-06889a4d9dfd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 18:18:42.031044    5650 system_pods.go:61] "csi-hostpath-attacher-0" [615f4b0a-9214-4b1d-82ce-6aa31f437ac8] Pending
	I1213 18:18:42.031066    5650 system_pods.go:61] "csi-hostpath-resizer-0" [b4d59a0b-7aee-4f55-87e5-2d3348509418] Pending
	I1213 18:18:42.031096    5650 system_pods.go:61] "csi-hostpathplugin-rlkjk" [59f61c6c-d034-49db-9bda-0afcdfb3e18b] Pending
	I1213 18:18:42.031123    5650 system_pods.go:61] "etcd-addons-377325" [d647e242-ca7d-448b-818e-6dc5efeaa694] Running
	I1213 18:18:42.031146    5650 system_pods.go:61] "kindnet-rtw78" [8e27fa8a-3f82-452a-b22c-b8a04db740b0] Running
	I1213 18:18:42.031179    5650 system_pods.go:61] "kube-apiserver-addons-377325" [5c61ee91-168f-4aad-b57d-39f41f5cb7f0] Running
	I1213 18:18:42.031199    5650 system_pods.go:61] "kube-controller-manager-addons-377325" [9d75c46d-b406-4fdc-bceb-92fec5da3b5c] Running
	I1213 18:18:42.031225    5650 system_pods.go:61] "kube-ingress-dns-minikube" [340a94e2-d09a-452b-99fd-0ac69b9d39dc] Pending
	I1213 18:18:42.031259    5650 system_pods.go:61] "kube-proxy-m8qkk" [850ee62f-39ba-438b-a2a9-88d3ac38d253] Running
	I1213 18:18:42.031281    5650 system_pods.go:61] "kube-scheduler-addons-377325" [ef2e124c-ca44-4dfc-954b-d57337637342] Running
	I1213 18:18:42.031302    5650 system_pods.go:61] "metrics-server-85b7d694d7-xj9z5" [16a6665b-52ee-4f79-9a95-e9367d750ab1] Pending
	I1213 18:18:42.031345    5650 system_pods.go:61] "nvidia-device-plugin-daemonset-qfgpv" [0270c6b1-ee5d-4441-ae6f-18e3e0423c29] Pending
	I1213 18:18:42.031364    5650 system_pods.go:61] "registry-6b586f9694-b6lxz" [e23f899f-6b28-4f63-adbd-2adb36c8f008] Pending
	I1213 18:18:42.031386    5650 system_pods.go:61] "registry-creds-764b6fb674-f9qf2" [e714a411-3862-4ffa-a880-421fa8708466] Pending
	I1213 18:18:42.031425    5650 system_pods.go:61] "registry-proxy-zxcm2" [e19a41e7-ad9e-4d36-8a5b-cc0fea51183a] Pending
	I1213 18:18:42.031458    5650 system_pods.go:61] "snapshot-controller-7d9fbc56b8-4ddpz" [87a0764e-abbb-468b-b2e5-b23a5e3eeae7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 18:18:42.031478    5650 system_pods.go:61] "snapshot-controller-7d9fbc56b8-sl9gg" [3b55e0b4-6d97-445d-9a0f-24d031fdf6a8] Pending
	I1213 18:18:42.031515    5650 system_pods.go:61] "storage-provisioner" [034f2b21-7609-45db-a977-8ec33924ac6b] Pending
	I1213 18:18:42.031536    5650 system_pods.go:74] duration metric: took 21.016721ms to wait for pod list to return data ...
	I1213 18:18:42.031559    5650 default_sa.go:34] waiting for default service account to be created ...
	I1213 18:18:42.119193    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:42.124544    5650 default_sa.go:45] found service account: "default"
	I1213 18:18:42.124626    5650 default_sa.go:55] duration metric: took 93.026726ms for default service account to be created ...
	I1213 18:18:42.124661    5650 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 18:18:42.178618    5650 system_pods.go:86] 19 kube-system pods found
	I1213 18:18:42.178673    5650 system_pods.go:89] "coredns-66bc5c9577-6ct6w" [c6b2d853-3212-44d5-9a75-06889a4d9dfd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 18:18:42.178685    5650 system_pods.go:89] "csi-hostpath-attacher-0" [615f4b0a-9214-4b1d-82ce-6aa31f437ac8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1213 18:18:42.178733    5650 system_pods.go:89] "csi-hostpath-resizer-0" [b4d59a0b-7aee-4f55-87e5-2d3348509418] Pending
	I1213 18:18:42.178746    5650 system_pods.go:89] "csi-hostpathplugin-rlkjk" [59f61c6c-d034-49db-9bda-0afcdfb3e18b] Pending
	I1213 18:18:42.178751    5650 system_pods.go:89] "etcd-addons-377325" [d647e242-ca7d-448b-818e-6dc5efeaa694] Running
	I1213 18:18:42.178756    5650 system_pods.go:89] "kindnet-rtw78" [8e27fa8a-3f82-452a-b22c-b8a04db740b0] Running
	I1213 18:18:42.178768    5650 system_pods.go:89] "kube-apiserver-addons-377325" [5c61ee91-168f-4aad-b57d-39f41f5cb7f0] Running
	I1213 18:18:42.178772    5650 system_pods.go:89] "kube-controller-manager-addons-377325" [9d75c46d-b406-4fdc-bceb-92fec5da3b5c] Running
	I1213 18:18:42.178777    5650 system_pods.go:89] "kube-ingress-dns-minikube" [340a94e2-d09a-452b-99fd-0ac69b9d39dc] Pending
	I1213 18:18:42.178796    5650 system_pods.go:89] "kube-proxy-m8qkk" [850ee62f-39ba-438b-a2a9-88d3ac38d253] Running
	I1213 18:18:42.178805    5650 system_pods.go:89] "kube-scheduler-addons-377325" [ef2e124c-ca44-4dfc-954b-d57337637342] Running
	I1213 18:18:42.178810    5650 system_pods.go:89] "metrics-server-85b7d694d7-xj9z5" [16a6665b-52ee-4f79-9a95-e9367d750ab1] Pending
	I1213 18:18:42.178825    5650 system_pods.go:89] "nvidia-device-plugin-daemonset-qfgpv" [0270c6b1-ee5d-4441-ae6f-18e3e0423c29] Pending
	I1213 18:18:42.178838    5650 system_pods.go:89] "registry-6b586f9694-b6lxz" [e23f899f-6b28-4f63-adbd-2adb36c8f008] Pending
	I1213 18:18:42.178844    5650 system_pods.go:89] "registry-creds-764b6fb674-f9qf2" [e714a411-3862-4ffa-a880-421fa8708466] Pending
	I1213 18:18:42.178855    5650 system_pods.go:89] "registry-proxy-zxcm2" [e19a41e7-ad9e-4d36-8a5b-cc0fea51183a] Pending
	I1213 18:18:42.178861    5650 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4ddpz" [87a0764e-abbb-468b-b2e5-b23a5e3eeae7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 18:18:42.178866    5650 system_pods.go:89] "snapshot-controller-7d9fbc56b8-sl9gg" [3b55e0b4-6d97-445d-9a0f-24d031fdf6a8] Pending
	I1213 18:18:42.178880    5650 system_pods.go:89] "storage-provisioner" [034f2b21-7609-45db-a977-8ec33924ac6b] Pending
	I1213 18:18:42.178907    5650 retry.go:31] will retry after 287.551206ms: missing components: kube-dns
	I1213 18:18:42.276046    5650 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1213 18:18:42.276070    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:42.481599    5650 system_pods.go:86] 19 kube-system pods found
	I1213 18:18:42.481641    5650 system_pods.go:89] "coredns-66bc5c9577-6ct6w" [c6b2d853-3212-44d5-9a75-06889a4d9dfd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 18:18:42.481690    5650 system_pods.go:89] "csi-hostpath-attacher-0" [615f4b0a-9214-4b1d-82ce-6aa31f437ac8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1213 18:18:42.481712    5650 system_pods.go:89] "csi-hostpath-resizer-0" [b4d59a0b-7aee-4f55-87e5-2d3348509418] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1213 18:18:42.481733    5650 system_pods.go:89] "csi-hostpathplugin-rlkjk" [59f61c6c-d034-49db-9bda-0afcdfb3e18b] Pending
	I1213 18:18:42.481756    5650 system_pods.go:89] "etcd-addons-377325" [d647e242-ca7d-448b-818e-6dc5efeaa694] Running
	I1213 18:18:42.481768    5650 system_pods.go:89] "kindnet-rtw78" [8e27fa8a-3f82-452a-b22c-b8a04db740b0] Running
	I1213 18:18:42.481772    5650 system_pods.go:89] "kube-apiserver-addons-377325" [5c61ee91-168f-4aad-b57d-39f41f5cb7f0] Running
	I1213 18:18:42.481777    5650 system_pods.go:89] "kube-controller-manager-addons-377325" [9d75c46d-b406-4fdc-bceb-92fec5da3b5c] Running
	I1213 18:18:42.481802    5650 system_pods.go:89] "kube-ingress-dns-minikube" [340a94e2-d09a-452b-99fd-0ac69b9d39dc] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1213 18:18:42.481808    5650 system_pods.go:89] "kube-proxy-m8qkk" [850ee62f-39ba-438b-a2a9-88d3ac38d253] Running
	I1213 18:18:42.481820    5650 system_pods.go:89] "kube-scheduler-addons-377325" [ef2e124c-ca44-4dfc-954b-d57337637342] Running
	I1213 18:18:42.481824    5650 system_pods.go:89] "metrics-server-85b7d694d7-xj9z5" [16a6665b-52ee-4f79-9a95-e9367d750ab1] Pending
	I1213 18:18:42.481832    5650 system_pods.go:89] "nvidia-device-plugin-daemonset-qfgpv" [0270c6b1-ee5d-4441-ae6f-18e3e0423c29] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1213 18:18:42.481847    5650 system_pods.go:89] "registry-6b586f9694-b6lxz" [e23f899f-6b28-4f63-adbd-2adb36c8f008] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1213 18:18:42.481852    5650 system_pods.go:89] "registry-creds-764b6fb674-f9qf2" [e714a411-3862-4ffa-a880-421fa8708466] Pending
	I1213 18:18:42.481892    5650 system_pods.go:89] "registry-proxy-zxcm2" [e19a41e7-ad9e-4d36-8a5b-cc0fea51183a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1213 18:18:42.481909    5650 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4ddpz" [87a0764e-abbb-468b-b2e5-b23a5e3eeae7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 18:18:42.481918    5650 system_pods.go:89] "snapshot-controller-7d9fbc56b8-sl9gg" [3b55e0b4-6d97-445d-9a0f-24d031fdf6a8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 18:18:42.481928    5650 system_pods.go:89] "storage-provisioner" [034f2b21-7609-45db-a977-8ec33924ac6b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 18:18:42.481943    5650 retry.go:31] will retry after 385.776543ms: missing components: kube-dns
	I1213 18:18:42.487148    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:42.488119    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:42.542577    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:42.774956    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:42.886745    5650 system_pods.go:86] 19 kube-system pods found
	I1213 18:18:42.886786    5650 system_pods.go:89] "coredns-66bc5c9577-6ct6w" [c6b2d853-3212-44d5-9a75-06889a4d9dfd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 18:18:42.886823    5650 system_pods.go:89] "csi-hostpath-attacher-0" [615f4b0a-9214-4b1d-82ce-6aa31f437ac8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1213 18:18:42.886840    5650 system_pods.go:89] "csi-hostpath-resizer-0" [b4d59a0b-7aee-4f55-87e5-2d3348509418] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1213 18:18:42.886849    5650 system_pods.go:89] "csi-hostpathplugin-rlkjk" [59f61c6c-d034-49db-9bda-0afcdfb3e18b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1213 18:18:42.886860    5650 system_pods.go:89] "etcd-addons-377325" [d647e242-ca7d-448b-818e-6dc5efeaa694] Running
	I1213 18:18:42.886866    5650 system_pods.go:89] "kindnet-rtw78" [8e27fa8a-3f82-452a-b22c-b8a04db740b0] Running
	I1213 18:18:42.886872    5650 system_pods.go:89] "kube-apiserver-addons-377325" [5c61ee91-168f-4aad-b57d-39f41f5cb7f0] Running
	I1213 18:18:42.886894    5650 system_pods.go:89] "kube-controller-manager-addons-377325" [9d75c46d-b406-4fdc-bceb-92fec5da3b5c] Running
	I1213 18:18:42.886914    5650 system_pods.go:89] "kube-ingress-dns-minikube" [340a94e2-d09a-452b-99fd-0ac69b9d39dc] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1213 18:18:42.886925    5650 system_pods.go:89] "kube-proxy-m8qkk" [850ee62f-39ba-438b-a2a9-88d3ac38d253] Running
	I1213 18:18:42.886931    5650 system_pods.go:89] "kube-scheduler-addons-377325" [ef2e124c-ca44-4dfc-954b-d57337637342] Running
	I1213 18:18:42.886937    5650 system_pods.go:89] "metrics-server-85b7d694d7-xj9z5" [16a6665b-52ee-4f79-9a95-e9367d750ab1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 18:18:42.886949    5650 system_pods.go:89] "nvidia-device-plugin-daemonset-qfgpv" [0270c6b1-ee5d-4441-ae6f-18e3e0423c29] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1213 18:18:42.886955    5650 system_pods.go:89] "registry-6b586f9694-b6lxz" [e23f899f-6b28-4f63-adbd-2adb36c8f008] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1213 18:18:42.886961    5650 system_pods.go:89] "registry-creds-764b6fb674-f9qf2" [e714a411-3862-4ffa-a880-421fa8708466] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1213 18:18:42.886972    5650 system_pods.go:89] "registry-proxy-zxcm2" [e19a41e7-ad9e-4d36-8a5b-cc0fea51183a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1213 18:18:42.886993    5650 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4ddpz" [87a0764e-abbb-468b-b2e5-b23a5e3eeae7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 18:18:42.887010    5650 system_pods.go:89] "snapshot-controller-7d9fbc56b8-sl9gg" [3b55e0b4-6d97-445d-9a0f-24d031fdf6a8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 18:18:42.887017    5650 system_pods.go:89] "storage-provisioner" [034f2b21-7609-45db-a977-8ec33924ac6b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 18:18:42.887037    5650 retry.go:31] will retry after 471.336241ms: missing components: kube-dns
	I1213 18:18:42.993256    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:43.001274    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:43.065337    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:43.274305    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:43.362631    5650 system_pods.go:86] 19 kube-system pods found
	I1213 18:18:43.362668    5650 system_pods.go:89] "coredns-66bc5c9577-6ct6w" [c6b2d853-3212-44d5-9a75-06889a4d9dfd] Running
	I1213 18:18:43.362679    5650 system_pods.go:89] "csi-hostpath-attacher-0" [615f4b0a-9214-4b1d-82ce-6aa31f437ac8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1213 18:18:43.362722    5650 system_pods.go:89] "csi-hostpath-resizer-0" [b4d59a0b-7aee-4f55-87e5-2d3348509418] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1213 18:18:43.362740    5650 system_pods.go:89] "csi-hostpathplugin-rlkjk" [59f61c6c-d034-49db-9bda-0afcdfb3e18b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1213 18:18:43.362745    5650 system_pods.go:89] "etcd-addons-377325" [d647e242-ca7d-448b-818e-6dc5efeaa694] Running
	I1213 18:18:43.362750    5650 system_pods.go:89] "kindnet-rtw78" [8e27fa8a-3f82-452a-b22c-b8a04db740b0] Running
	I1213 18:18:43.362755    5650 system_pods.go:89] "kube-apiserver-addons-377325" [5c61ee91-168f-4aad-b57d-39f41f5cb7f0] Running
	I1213 18:18:43.362768    5650 system_pods.go:89] "kube-controller-manager-addons-377325" [9d75c46d-b406-4fdc-bceb-92fec5da3b5c] Running
	I1213 18:18:43.362790    5650 system_pods.go:89] "kube-ingress-dns-minikube" [340a94e2-d09a-452b-99fd-0ac69b9d39dc] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1213 18:18:43.362801    5650 system_pods.go:89] "kube-proxy-m8qkk" [850ee62f-39ba-438b-a2a9-88d3ac38d253] Running
	I1213 18:18:43.362822    5650 system_pods.go:89] "kube-scheduler-addons-377325" [ef2e124c-ca44-4dfc-954b-d57337637342] Running
	I1213 18:18:43.362836    5650 system_pods.go:89] "metrics-server-85b7d694d7-xj9z5" [16a6665b-52ee-4f79-9a95-e9367d750ab1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 18:18:43.362842    5650 system_pods.go:89] "nvidia-device-plugin-daemonset-qfgpv" [0270c6b1-ee5d-4441-ae6f-18e3e0423c29] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1213 18:18:43.362851    5650 system_pods.go:89] "registry-6b586f9694-b6lxz" [e23f899f-6b28-4f63-adbd-2adb36c8f008] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1213 18:18:43.362862    5650 system_pods.go:89] "registry-creds-764b6fb674-f9qf2" [e714a411-3862-4ffa-a880-421fa8708466] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1213 18:18:43.362871    5650 system_pods.go:89] "registry-proxy-zxcm2" [e19a41e7-ad9e-4d36-8a5b-cc0fea51183a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1213 18:18:43.362878    5650 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4ddpz" [87a0764e-abbb-468b-b2e5-b23a5e3eeae7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 18:18:43.362914    5650 system_pods.go:89] "snapshot-controller-7d9fbc56b8-sl9gg" [3b55e0b4-6d97-445d-9a0f-24d031fdf6a8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 18:18:43.362928    5650 system_pods.go:89] "storage-provisioner" [034f2b21-7609-45db-a977-8ec33924ac6b] Running
	I1213 18:18:43.362941    5650 system_pods.go:126] duration metric: took 1.238259826s to wait for k8s-apps to be running ...
	I1213 18:18:43.362955    5650 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 18:18:43.363025    5650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 18:18:43.380862    5650 system_svc.go:56] duration metric: took 17.885786ms WaitForService to wait for kubelet
	I1213 18:18:43.380939    5650 kubeadm.go:587] duration metric: took 43.394507038s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 18:18:43.380972    5650 node_conditions.go:102] verifying NodePressure condition ...
	I1213 18:18:43.384344    5650 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1213 18:18:43.384424    5650 node_conditions.go:123] node cpu capacity is 2
	I1213 18:18:43.384452    5650 node_conditions.go:105] duration metric: took 3.460941ms to run NodePressure ...
	I1213 18:18:43.384479    5650 start.go:242] waiting for startup goroutines ...
	I1213 18:18:43.485469    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:43.485972    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:43.585651    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:43.774862    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:43.982846    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:43.985449    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:44.042933    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:44.273313    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:44.483739    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:44.483941    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:44.537088    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:44.773459    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:44.983783    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:44.984548    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:45.040364    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:45.291213    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:45.482849    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:45.489360    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:45.537153    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:45.774153    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:45.983646    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:45.984117    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:46.037045    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:46.273734    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:46.484083    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:46.484726    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:46.538004    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:46.774383    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:46.983698    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:46.985776    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:47.037136    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:47.273690    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:47.484456    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:47.484733    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:47.536675    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:47.772995    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:47.982429    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:47.984727    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:48.037663    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:48.273321    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:48.482664    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:48.484341    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:48.538247    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:48.773788    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:48.981804    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:48.984179    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:49.037957    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:49.273601    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:49.483655    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:49.484258    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:49.537235    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:49.774066    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:49.982052    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:49.983921    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:50.038802    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:50.274556    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:50.488203    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:50.488329    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:50.537073    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:50.773512    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:50.982243    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:50.984450    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:51.037646    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:51.272845    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:51.483456    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:51.486557    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:51.538465    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:51.774232    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:51.982135    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:51.984533    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:52.037926    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:52.273782    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:52.482840    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:52.485133    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:52.537516    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:52.773281    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:52.982329    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:52.984414    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:53.037763    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:53.272923    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:53.485219    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:53.485588    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:53.537296    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:53.774838    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:53.985224    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:53.985711    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:54.037686    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:54.274236    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:54.482767    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:54.485774    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:54.536972    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:54.774019    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:54.985647    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:54.985785    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:55.043159    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:55.274382    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:55.483938    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:55.484944    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:55.537075    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:55.773548    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:55.984114    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:55.984284    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:56.037381    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:56.274144    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:56.484401    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:56.484571    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:56.537734    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:56.774290    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:56.984231    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:56.984405    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:57.037286    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:57.273431    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:57.482816    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:57.486840    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:57.537173    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:57.774789    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:57.983456    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:57.984784    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:58.036581    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:58.272792    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:58.483274    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:58.484786    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:58.537718    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:58.778219    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:58.986905    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:58.987189    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:59.086575    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:59.282453    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:59.485562    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:59.485909    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:59.537433    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:59.774764    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:59.984947    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:59.985989    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:00.040715    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:00.319929    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:00.482996    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:00.485542    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:00.537971    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:00.783025    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:00.985238    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:00.985350    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:01.037242    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:01.275106    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:01.485703    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:01.486064    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:01.537328    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:01.779420    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:01.985972    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:01.988689    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:02.039439    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:02.274626    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:02.483580    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:02.485042    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:02.536957    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:02.773575    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:02.982954    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:02.984368    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:03.037360    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:03.274306    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:03.483019    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:03.486487    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:03.537832    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:03.774518    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:03.983189    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:03.984205    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:04.037052    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:04.273938    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:04.483386    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:04.485678    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:04.537915    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:04.773647    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:04.983958    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:04.984298    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:05.037566    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:05.274765    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:05.485218    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:05.486039    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:05.536993    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:05.775477    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:05.985454    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:05.985619    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:06.037679    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:06.273210    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:06.483721    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:06.485259    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:06.537599    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:06.774290    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:06.985066    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:06.985396    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:07.037432    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:07.274203    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:07.483857    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:07.485096    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:07.537180    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:07.773626    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:07.984399    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:07.984533    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:08.037707    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:08.273161    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:08.482942    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:08.485787    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:08.537907    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:08.773721    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:08.983202    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:08.984250    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:09.037050    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:09.273821    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:09.482483    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:09.484299    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:09.537547    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:09.774080    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:09.984214    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:09.984672    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:10.037214    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:10.273398    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:10.483839    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:10.484819    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:10.537199    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:10.773932    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:10.982913    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:10.984800    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:11.037944    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:11.273992    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:11.482744    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:11.484437    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:11.537643    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:11.773223    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:11.983816    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:11.984414    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:12.037292    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:12.273570    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:12.484915    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:12.485162    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:12.539974    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:12.773829    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:12.982423    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:12.984676    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:13.037932    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:13.274131    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:13.483747    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:13.485331    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:13.537404    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:13.800454    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:13.982801    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:13.985350    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:14.037293    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:14.273787    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:14.484354    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:14.485533    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:14.538006    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:14.774013    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:14.984795    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:14.985180    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:15.044225    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:15.273657    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:15.490116    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:15.490862    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:15.536686    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:15.777796    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:15.983753    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:15.985384    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:16.086251    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:16.273920    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:16.483728    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:16.485337    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:16.537475    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:16.774842    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:16.983679    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:16.985561    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:17.037560    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:17.275477    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:17.486171    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:17.486388    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:17.537452    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:17.774367    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:17.983379    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:17.986410    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:18.037734    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:18.273758    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:18.482816    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:18.484190    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:18.537661    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:18.773347    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:18.986279    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:18.986562    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:19.047464    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:19.277274    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:19.490891    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:19.491155    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:19.538614    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:19.775934    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:19.986356    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:19.986632    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:20.039986    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:20.274173    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:20.511458    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:20.511910    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:20.543455    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:20.774922    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:20.983107    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:20.985649    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:21.040193    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:21.274070    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:21.482963    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:21.484936    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:21.537357    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:21.773524    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:21.984671    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:21.987581    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:22.084871    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:22.272957    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:22.483594    5650 kapi.go:107] duration metric: took 1m16.00288394s to wait for kubernetes.io/minikube-addons=registry ...
	I1213 18:19:22.483845    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:22.537453    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:22.773772    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:22.982842    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:23.040695    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:23.273505    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:23.483071    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:23.537625    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:23.774063    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:23.982954    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:24.038265    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:24.274086    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:24.482156    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:24.536739    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:24.773740    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:24.981874    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:25.037325    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:25.274015    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:25.482582    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:25.537168    5650 kapi.go:107] duration metric: took 1m15.503372971s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1213 18:19:25.540551    5650 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-377325 cluster.
	I1213 18:19:25.543278    5650 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1213 18:19:25.546075    5650 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1213 18:19:25.773884    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:25.982315    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:26.273826    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:26.482069    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:26.773380    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:26.982824    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:27.273420    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:27.482772    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:27.772829    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:27.981641    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:28.273451    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:28.482886    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:28.772985    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:28.981854    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:29.273481    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:29.482854    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:29.773170    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:29.982449    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:30.272879    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:30.482852    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:30.773718    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:30.982867    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:31.273406    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:31.482734    5650 kapi.go:107] duration metric: took 1m25.003850297s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1213 18:19:31.773977    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:32.273696    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:32.775115    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:33.273880    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:33.773901    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:34.274855    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:34.773622    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:35.272679    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:35.774270    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:36.274190    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:36.775663    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:37.292300    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:37.774009    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:38.273668    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:38.774034    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:39.274789    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:39.774419    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:40.273527    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:40.807898    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:41.273774    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:41.774065    5650 kapi.go:107] duration metric: took 1m35.004328554s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1213 18:19:41.777393    5650 out.go:179] * Enabled addons: cloud-spanner, registry-creds, amd-gpu-device-plugin, default-storageclass, storage-provisioner, nvidia-device-plugin, inspektor-gadget, ingress-dns, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1213 18:19:41.780333    5650 addons.go:530] duration metric: took 1m41.793448769s for enable addons: enabled=[cloud-spanner registry-creds amd-gpu-device-plugin default-storageclass storage-provisioner nvidia-device-plugin inspektor-gadget ingress-dns metrics-server yakd storage-provisioner-rancher volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1213 18:19:41.780401    5650 start.go:247] waiting for cluster config update ...
	I1213 18:19:41.780427    5650 start.go:256] writing updated cluster config ...
	I1213 18:19:41.780747    5650 ssh_runner.go:195] Run: rm -f paused
	I1213 18:19:41.785511    5650 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 18:19:41.788962    5650 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6ct6w" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 18:19:41.795078    5650 pod_ready.go:94] pod "coredns-66bc5c9577-6ct6w" is "Ready"
	I1213 18:19:41.795105    5650 pod_ready.go:86] duration metric: took 6.115145ms for pod "coredns-66bc5c9577-6ct6w" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 18:19:41.797594    5650 pod_ready.go:83] waiting for pod "etcd-addons-377325" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 18:19:41.802306    5650 pod_ready.go:94] pod "etcd-addons-377325" is "Ready"
	I1213 18:19:41.802332    5650 pod_ready.go:86] duration metric: took 4.710369ms for pod "etcd-addons-377325" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 18:19:41.804949    5650 pod_ready.go:83] waiting for pod "kube-apiserver-addons-377325" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 18:19:41.809733    5650 pod_ready.go:94] pod "kube-apiserver-addons-377325" is "Ready"
	I1213 18:19:41.809762    5650 pod_ready.go:86] duration metric: took 4.786735ms for pod "kube-apiserver-addons-377325" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 18:19:41.812359    5650 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-377325" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 18:19:42.190858    5650 pod_ready.go:94] pod "kube-controller-manager-addons-377325" is "Ready"
	I1213 18:19:42.190893    5650 pod_ready.go:86] duration metric: took 378.506551ms for pod "kube-controller-manager-addons-377325" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 18:19:42.390401    5650 pod_ready.go:83] waiting for pod "kube-proxy-m8qkk" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 18:19:42.789154    5650 pod_ready.go:94] pod "kube-proxy-m8qkk" is "Ready"
	I1213 18:19:42.789224    5650 pod_ready.go:86] duration metric: took 398.795001ms for pod "kube-proxy-m8qkk" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 18:19:42.989391    5650 pod_ready.go:83] waiting for pod "kube-scheduler-addons-377325" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 18:19:43.389684    5650 pod_ready.go:94] pod "kube-scheduler-addons-377325" is "Ready"
	I1213 18:19:43.389718    5650 pod_ready.go:86] duration metric: took 400.257469ms for pod "kube-scheduler-addons-377325" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 18:19:43.389731    5650 pod_ready.go:40] duration metric: took 1.604186065s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 18:19:43.783034    5650 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1213 18:19:43.794233    5650 out.go:179] * Done! kubectl is now configured to use "addons-377325" cluster and "default" namespace by default
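Note: the wait loop above polls each addon's pods by label until they report Ready — registry after 1m16s, gcp-auth after roughly 1m15.5s, ingress-nginx after 1m25s, and csi-hostpath-driver after 1m35s. To spot-check the same selectors by hand against this profile, something like the following should work (namespaces taken from the container listing further down in this report; the commands are a sketch, not part of the test run):

    kubectl --context addons-377325 -n kube-system get pods -l kubernetes.io/minikube-addons=registry
    kubectl --context addons-377325 -n kube-system get pods -l kubernetes.io/minikube-addons=csi-hostpath-driver
    kubectl --context addons-377325 -n gcp-auth get pods -l kubernetes.io/minikube-addons=gcp-auth
    kubectl --context addons-377325 -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx

As the gcp-auth note at 18:19:25 says, a pod created with a label whose key is gcp-auth-skip-secret is left alone by the credential-mounting webhook; pods that already exist only pick up credentials after being recreated or after re-running the addon with --refresh.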
	
	
	==> CRI-O <==
	Dec 13 18:22:24 addons-377325 crio[833]: time="2025-12-13T18:22:24.241109162Z" level=info msg="Removed container a6a159c835f62a57b63c859c9aec315ff4fefd937b2985dc2ae49e9cbb9becd6: kube-system/registry-creds-764b6fb674-f9qf2/registry-creds" id=2d6cfc2a-7b18-417c-ad91-c0ca0a104327 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 18:22:43 addons-377325 crio[833]: time="2025-12-13T18:22:43.378296859Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-lsxsm/POD" id=7871a09d-1100-4871-9d7e-dc6ce1a2c320 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 13 18:22:43 addons-377325 crio[833]: time="2025-12-13T18:22:43.378371567Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 18:22:43 addons-377325 crio[833]: time="2025-12-13T18:22:43.389896348Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-lsxsm Namespace:default ID:1b3f81205d701df2987e9b76276892d9ef4c15fb8824802791bd5a534a6b1b37 UID:b13a0885-929f-46b2-a79c-b44a158d3892 NetNS:/var/run/netns/3a63bc85-e4e2-434c-83be-1cc7c37abf9f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001ccee50}] Aliases:map[]}"
	Dec 13 18:22:43 addons-377325 crio[833]: time="2025-12-13T18:22:43.390088104Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-lsxsm to CNI network \"kindnet\" (type=ptp)"
	Dec 13 18:22:43 addons-377325 crio[833]: time="2025-12-13T18:22:43.410634355Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-lsxsm Namespace:default ID:1b3f81205d701df2987e9b76276892d9ef4c15fb8824802791bd5a534a6b1b37 UID:b13a0885-929f-46b2-a79c-b44a158d3892 NetNS:/var/run/netns/3a63bc85-e4e2-434c-83be-1cc7c37abf9f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001ccee50}] Aliases:map[]}"
	Dec 13 18:22:43 addons-377325 crio[833]: time="2025-12-13T18:22:43.410791092Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-lsxsm for CNI network kindnet (type=ptp)"
	Dec 13 18:22:43 addons-377325 crio[833]: time="2025-12-13T18:22:43.416038083Z" level=info msg="Ran pod sandbox 1b3f81205d701df2987e9b76276892d9ef4c15fb8824802791bd5a534a6b1b37 with infra container: default/hello-world-app-5d498dc89-lsxsm/POD" id=7871a09d-1100-4871-9d7e-dc6ce1a2c320 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 13 18:22:43 addons-377325 crio[833]: time="2025-12-13T18:22:43.419603075Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=ae845598-7f01-4b20-b9a3-58de6e47d095 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:22:43 addons-377325 crio[833]: time="2025-12-13T18:22:43.419723889Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=ae845598-7f01-4b20-b9a3-58de6e47d095 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:22:43 addons-377325 crio[833]: time="2025-12-13T18:22:43.419759221Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:1.0 found" id=ae845598-7f01-4b20-b9a3-58de6e47d095 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:22:43 addons-377325 crio[833]: time="2025-12-13T18:22:43.420581057Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=ad758836-cf3c-40ca-b524-639e3ea5a054 name=/runtime.v1.ImageService/PullImage
	Dec 13 18:22:43 addons-377325 crio[833]: time="2025-12-13T18:22:43.423271322Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Dec 13 18:22:44 addons-377325 crio[833]: time="2025-12-13T18:22:44.03487887Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=ad758836-cf3c-40ca-b524-639e3ea5a054 name=/runtime.v1.ImageService/PullImage
	Dec 13 18:22:44 addons-377325 crio[833]: time="2025-12-13T18:22:44.03555535Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=f249744a-3660-46ec-8d80-a11eec23e60f name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:22:44 addons-377325 crio[833]: time="2025-12-13T18:22:44.037690253Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=c5ce2dd6-94f3-41b5-b81e-9e55749b5ba9 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:22:44 addons-377325 crio[833]: time="2025-12-13T18:22:44.046546602Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-lsxsm/hello-world-app" id=0772528b-eee6-4c9b-8204-e7afff5a643f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 18:22:44 addons-377325 crio[833]: time="2025-12-13T18:22:44.046675933Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 18:22:44 addons-377325 crio[833]: time="2025-12-13T18:22:44.054582444Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 18:22:44 addons-377325 crio[833]: time="2025-12-13T18:22:44.054909283Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/c788af4e799d9762e4561e3dde2b3cf59fc779ecdfb5f93152963f1f1be7c2e3/merged/etc/passwd: no such file or directory"
	Dec 13 18:22:44 addons-377325 crio[833]: time="2025-12-13T18:22:44.055009313Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/c788af4e799d9762e4561e3dde2b3cf59fc779ecdfb5f93152963f1f1be7c2e3/merged/etc/group: no such file or directory"
	Dec 13 18:22:44 addons-377325 crio[833]: time="2025-12-13T18:22:44.05535299Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 18:22:44 addons-377325 crio[833]: time="2025-12-13T18:22:44.0758042Z" level=info msg="Created container 91ad1b6472c875a6952739c4f6c1593682ede880b9f5459471e2b25a08a1c1a8: default/hello-world-app-5d498dc89-lsxsm/hello-world-app" id=0772528b-eee6-4c9b-8204-e7afff5a643f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 18:22:44 addons-377325 crio[833]: time="2025-12-13T18:22:44.078895406Z" level=info msg="Starting container: 91ad1b6472c875a6952739c4f6c1593682ede880b9f5459471e2b25a08a1c1a8" id=9f9b2286-e52b-448a-ac20-3e780b0d1f38 name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 18:22:44 addons-377325 crio[833]: time="2025-12-13T18:22:44.083204014Z" level=info msg="Started container" PID=6961 containerID=91ad1b6472c875a6952739c4f6c1593682ede880b9f5459471e2b25a08a1c1a8 description=default/hello-world-app-5d498dc89-lsxsm/hello-world-app id=9f9b2286-e52b-448a-ac20-3e780b0d1f38 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1b3f81205d701df2987e9b76276892d9ef4c15fb8824802791bd5a534a6b1b37
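Note: the CRI-O entries above trace the full start path for hello-world-app-5d498dc89-lsxsm: RunPodSandbox attaches the pod to the kindnet CNI network, ImageStatus reports a miss, PullImage fetches docker.io/kicbase/echo-server:1.0 by digest, and CreateContainer/StartContainer run it (the /etc/passwd and /etc/group warnings are harmless here; the echo-server image simply ships neither file). The same objects can be inspected on the node with crictl, roughly as follows (a sketch, assuming crictl and sudo are available inside the minikube node, as they normally are with the cri-o runtime):

    minikube -p addons-377325 ssh
    sudo crictl pods --name hello-world-app
    sudo crictl ps -a --name hello-world-app
    sudo crictl images | grep echo-server
    sudo crictl inspect 91ad1b6472c87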
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	91ad1b6472c87       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        Less than a second ago   Running             hello-world-app                          0                   1b3f81205d701       hello-world-app-5d498dc89-lsxsm             default
	de7e7fcdc108c       a2fd0654e5baeec8de2209bfade13a0034e942e708fd2bbfce69bb26a3c02e14                                                                             21 seconds ago           Exited              registry-creds                           4                   f35f81f365a78       registry-creds-764b6fb674-f9qf2             kube-system
	e133867c2d300       10afed3caf3eed1b711b8fa0a9600a7b488a45653a15a598a47ac570c1204cc4                                                                             2 minutes ago            Running             nginx                                    0                   6adca4898bf8a       nginx                                       default
	9ef1f75ce2915       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          2 minutes ago            Running             busybox                                  0                   1c153b6570c0d       busybox                                     default
	42d706a88ed1b       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago            Running             csi-snapshotter                          0                   4432841cc128d       csi-hostpathplugin-rlkjk                    kube-system
	1c4f8a1dece34       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago            Running             csi-provisioner                          0                   4432841cc128d       csi-hostpathplugin-rlkjk                    kube-system
	f361bc25cf32b       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago            Running             liveness-probe                           0                   4432841cc128d       csi-hostpathplugin-rlkjk                    kube-system
	352bbc3896f30       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago            Running             hostpath                                 0                   4432841cc128d       csi-hostpathplugin-rlkjk                    kube-system
	6e046f2674c0c       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:fadc7bf59b69965b6707edb68022bed4f55a1f99b15f7acd272793e48f171496                            3 minutes ago            Running             gadget                                   0                   eb3aa5538db7e       gadget-btw98                                gadget
	2e7fb6d0ca7ac       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago            Running             node-driver-registrar                    0                   4432841cc128d       csi-hostpathplugin-rlkjk                    kube-system
	8554f7b9834d4       registry.k8s.io/ingress-nginx/controller@sha256:75494e2145fbebf362d24e24e9285b7fbb7da8783ab272092e3126e24ee4776d                             3 minutes ago            Running             controller                               0                   a51c674e698f8       ingress-nginx-controller-85d4c799dd-422pz   ingress-nginx
	18b8bafbb6153       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago            Running             gcp-auth                                 0                   b7540fb4e90f8       gcp-auth-78565c9fb4-t8vr8                   gcp-auth
	cd33fc9243f51       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              3 minutes ago            Running             registry-proxy                           0                   bc05f87b0190e       registry-proxy-zxcm2                        kube-system
	3946c9e84e3da       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        3 minutes ago            Running             metrics-server                           0                   d3ab1879b99df       metrics-server-85b7d694d7-xj9z5             kube-system
	98b036da9d856       gcr.io/cloud-spanner-emulator/emulator@sha256:daeab9cb1978e02113045625e2633619f465f22aac7638101995f4cd03607170                               3 minutes ago            Running             cloud-spanner-emulator                   0                   06b83f1edd870       cloud-spanner-emulator-5bdddb765-vxvnq      default
	054c83d5a1f87       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              3 minutes ago            Running             csi-resizer                              0                   48b2dba852e46       csi-hostpath-resizer-0                      kube-system
	87610c2eb50cf       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   da9fb2130cdef       nvidia-device-plugin-daemonset-qfgpv        kube-system
	52764c4f81789       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           3 minutes ago            Running             registry                                 0                   73e04023318a9       registry-6b586f9694-b6lxz                   kube-system
	0a800ad4dd0e9       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   77ee3184ef830       snapshot-controller-7d9fbc56b8-4ddpz        kube-system
	eb2db55011acb       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             3 minutes ago            Running             local-path-provisioner                   0                   344cfe54e76ab       local-path-provisioner-648f6765c9-pgkvr     local-path-storage
	7dddc3bceec5a       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   3 minutes ago            Running             csi-external-health-monitor-controller   0                   4432841cc128d       csi-hostpathplugin-rlkjk                    kube-system
	599b8ce504818       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   d381af8b77d0d       snapshot-controller-7d9fbc56b8-sl9gg        kube-system
	5abc99c42c2ae       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:c9c1ef89e4bb9d6c9c6c0b5375c3253a0b951e5b731240be20cebe5593de142d                   3 minutes ago            Exited              patch                                    0                   8e26ec8a10e0b       ingress-nginx-admission-patch-sqd6d         ingress-nginx
	228e6f9a0fded       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               3 minutes ago            Running             minikube-ingress-dns                     0                   3c4e107707b77       kube-ingress-dns-minikube                   kube-system
	388dcec00ffb8       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:c9c1ef89e4bb9d6c9c6c0b5375c3253a0b951e5b731240be20cebe5593de142d                   3 minutes ago            Exited              create                                   0                   b7e6f5e285471       ingress-nginx-admission-create-bhb5h        ingress-nginx
	0d77a566cb2c6       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             3 minutes ago            Running             csi-attacher                             0                   8f085c673c00a       csi-hostpath-attacher-0                     kube-system
	4ff2b97ce30ed       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              3 minutes ago            Running             yakd                                     0                   988c876e76055       yakd-dashboard-5ff678cb9-4g4kw              yakd-dashboard
	dae0269172396       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago            Running             storage-provisioner                      0                   cadf8295cc7d2       storage-provisioner                         kube-system
	c37b9bf999a3f       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago            Running             coredns                                  0                   90c831d769b08       coredns-66bc5c9577-6ct6w                    kube-system
	57a4c5bd3b052       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                                                             4 minutes ago            Running             kube-proxy                               0                   4fd28f9ac87c8       kube-proxy-m8qkk                            kube-system
	05178b358a31f       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             4 minutes ago            Running             kindnet-cni                              0                   8cb043565585f       kindnet-rtw78                               kube-system
	4c0b427c73b3b       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                                                             4 minutes ago            Running             kube-scheduler                           0                   d7265bd2288d4       kube-scheduler-addons-377325                kube-system
	003f9ee38f6b4       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                                                             4 minutes ago            Running             kube-controller-manager                  0                   e49a21d77735d       kube-controller-manager-addons-377325       kube-system
	9f44e406e70a4       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                                                             4 minutes ago            Running             etcd                                     0                   176c3b729eff9       etcd-addons-377325                          kube-system
	3edde11a7e903       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7                                                                             4 minutes ago            Running             kube-apiserver                           0                   d54df531fe9b8       kube-apiserver-addons-377325                kube-system
	
	
	==> coredns [c37b9bf999a3f7ee5efa91a30230aedd4764b122566edbc45a747e71e6f77aee] <==
	[INFO] 10.244.0.14:36238 - 62493 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002835186s
	[INFO] 10.244.0.14:36238 - 58385 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000130545s
	[INFO] 10.244.0.14:36238 - 4890 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000177988s
	[INFO] 10.244.0.14:48046 - 27896 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000159682s
	[INFO] 10.244.0.14:48046 - 27427 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000137938s
	[INFO] 10.244.0.14:37118 - 18009 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000099628s
	[INFO] 10.244.0.14:37118 - 17788 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000187563s
	[INFO] 10.244.0.14:54220 - 46876 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000110812s
	[INFO] 10.244.0.14:54220 - 46647 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000200273s
	[INFO] 10.244.0.14:47250 - 36227 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001606132s
	[INFO] 10.244.0.14:47250 - 36022 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001543954s
	[INFO] 10.244.0.14:45882 - 37628 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00013697s
	[INFO] 10.244.0.14:45882 - 37208 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000075291s
	[INFO] 10.244.0.20:55855 - 8897 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000165303s
	[INFO] 10.244.0.20:33910 - 20049 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000080108s
	[INFO] 10.244.0.20:52805 - 31624 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000096468s
	[INFO] 10.244.0.20:33823 - 9905 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000073371s
	[INFO] 10.244.0.20:40579 - 48271 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000096559s
	[INFO] 10.244.0.20:46652 - 56081 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000071s
	[INFO] 10.244.0.20:38864 - 53583 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002023918s
	[INFO] 10.244.0.20:59060 - 22171 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001739411s
	[INFO] 10.244.0.20:35794 - 42123 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.009097607s
	[INFO] 10.244.0.20:42467 - 30138 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.008140943s
	[INFO] 10.244.0.23:35278 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000207978s
	[INFO] 10.244.0.23:34560 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000156934s
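Note: the NXDOMAIN/NOERROR pattern above is standard Kubernetes search-domain expansion. With the cluster's default ndots:5, a lookup of registry.kube-system.svc.cluster.local (only four dots) is first tried with each suffix from the pod's search list (kube-system.svc.cluster.local, svc.cluster.local, cluster.local, us-east-2.compute.internal), each returning NXDOMAIN, before the absolute name finally answers NOERROR. This can be reproduced from the busybox pod in this run with something like the following (a sketch; whether nslookup is present depends on the busybox image used):

    kubectl --context addons-377325 exec busybox -- cat /etc/resolv.conf
    kubectl --context addons-377325 exec busybox -- nslookup registry.kube-system.svc.cluster.local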
	
	
	==> describe nodes <==
	Name:               addons-377325
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-377325
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=142a8bd7cb3f031b5f72a3965bb211dc77d9e1a7
	                    minikube.k8s.io/name=addons-377325
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T18_17_55_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-377325
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-377325"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 18:17:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-377325
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 18:22:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 18:21:30 +0000   Sat, 13 Dec 2025 18:17:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 18:21:30 +0000   Sat, 13 Dec 2025 18:17:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 18:21:30 +0000   Sat, 13 Dec 2025 18:17:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 18:21:30 +0000   Sat, 13 Dec 2025 18:18:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-377325
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 78f85184c267cd52312ad0096937f858
	  System UUID:                5b0dab5a-1c6e-44ba-8710-19d123f14c68
	  Boot ID:                    76aeba50-958b-45ee-957d-e00cd07a99b2
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m1s
	  default                     cloud-spanner-emulator-5bdddb765-vxvnq       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	  default                     hello-world-app-5d498dc89-lsxsm              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  gadget                      gadget-btw98                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  gcp-auth                    gcp-auth-78565c9fb4-t8vr8                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-422pz    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         4m39s
	  kube-system                 coredns-66bc5c9577-6ct6w                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m45s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 csi-hostpathplugin-rlkjk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 etcd-addons-377325                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         4m52s
	  kube-system                 kindnet-rtw78                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m46s
	  kube-system                 kube-apiserver-addons-377325                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 kube-controller-manager-addons-377325        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 kube-proxy-m8qkk                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 kube-scheduler-addons-377325                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 metrics-server-85b7d694d7-xj9z5              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         4m40s
	  kube-system                 nvidia-device-plugin-daemonset-qfgpv         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 registry-6b586f9694-b6lxz                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 registry-creds-764b6fb674-f9qf2              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 registry-proxy-zxcm2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 snapshot-controller-7d9fbc56b8-4ddpz         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 snapshot-controller-7d9fbc56b8-sl9gg         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  local-path-storage          local-path-provisioner-648f6765c9-pgkvr      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-4g4kw               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     4m40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 4m43s  kube-proxy       
	  Normal   Starting                 4m51s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m51s  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m51s  kubelet          Node addons-377325 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m51s  kubelet          Node addons-377325 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m51s  kubelet          Node addons-377325 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m46s  node-controller  Node addons-377325 event: Registered Node addons-377325 in Controller
	  Normal   NodeReady                4m4s   kubelet          Node addons-377325 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec13 17:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014739] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.517365] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033368] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.774100] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.795951] kauditd_printk_skb: 39 callbacks suppressed
	[Dec13 18:17] overlayfs: idmapped layers are currently not supported
	[  +0.067652] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [9f44e406e70a42ff3053d90866118a64ff6559f7d4c5878e24daa08620477af0] <==
	{"level":"warn","ts":"2025-12-13T18:17:50.876713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T18:17:50.887073Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T18:17:50.905286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T18:17:50.922096Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T18:17:50.939137Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T18:17:50.956980Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T18:17:50.974506Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T18:17:50.994464Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T18:17:51.009085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T18:17:51.034541Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T18:17:51.050967Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T18:17:51.064639Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T18:17:51.087235Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T18:17:51.106405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T18:17:51.163188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T18:17:51.207117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T18:17:51.222412Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T18:17:51.247196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T18:17:51.309621Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T18:18:07.057661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T18:18:07.073706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T18:18:29.173439Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T18:18:29.191066Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T18:18:29.229334Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T18:18:29.241413Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39040","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [18b8bafbb61534744f02d69567b274020e1f069ec20206f130bb2a96bbfd9099] <==
	2025/12/13 18:19:25 GCP Auth Webhook started!
	2025/12/13 18:19:44 Ready to marshal response ...
	2025/12/13 18:19:44 Ready to write response ...
	2025/12/13 18:19:44 Ready to marshal response ...
	2025/12/13 18:19:44 Ready to write response ...
	2025/12/13 18:19:44 Ready to marshal response ...
	2025/12/13 18:19:44 Ready to write response ...
	2025/12/13 18:20:06 Ready to marshal response ...
	2025/12/13 18:20:06 Ready to write response ...
	2025/12/13 18:20:11 Ready to marshal response ...
	2025/12/13 18:20:11 Ready to write response ...
	2025/12/13 18:20:24 Ready to marshal response ...
	2025/12/13 18:20:24 Ready to write response ...
	2025/12/13 18:20:31 Ready to marshal response ...
	2025/12/13 18:20:31 Ready to write response ...
	2025/12/13 18:20:40 Ready to marshal response ...
	2025/12/13 18:20:40 Ready to write response ...
	2025/12/13 18:20:40 Ready to marshal response ...
	2025/12/13 18:20:40 Ready to write response ...
	2025/12/13 18:20:48 Ready to marshal response ...
	2025/12/13 18:20:48 Ready to write response ...
	2025/12/13 18:22:43 Ready to marshal response ...
	2025/12/13 18:22:43 Ready to write response ...
	
	
	==> kernel <==
	 18:22:45 up  1:05,  0 user,  load average: 0.57, 1.03, 0.56
	Linux addons-377325 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [05178b358a31f960ebd0c746e41e311b3501e13c8dc83cd6e55fdc24cb53d30a] <==
	I1213 18:20:41.319643       1 main.go:301] handling current node
	I1213 18:20:51.319606       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 18:20:51.319673       1 main.go:301] handling current node
	I1213 18:21:01.319576       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 18:21:01.319610       1 main.go:301] handling current node
	I1213 18:21:11.319877       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 18:21:11.319916       1 main.go:301] handling current node
	I1213 18:21:21.321714       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 18:21:21.321755       1 main.go:301] handling current node
	I1213 18:21:31.319599       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 18:21:31.319660       1 main.go:301] handling current node
	I1213 18:21:41.325128       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 18:21:41.325186       1 main.go:301] handling current node
	I1213 18:21:51.325460       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 18:21:51.325572       1 main.go:301] handling current node
	I1213 18:22:01.319612       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 18:22:01.319651       1 main.go:301] handling current node
	I1213 18:22:11.326006       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 18:22:11.326061       1 main.go:301] handling current node
	I1213 18:22:21.327031       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 18:22:21.327068       1 main.go:301] handling current node
	I1213 18:22:31.324552       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 18:22:31.324585       1 main.go:301] handling current node
	I1213 18:22:41.320581       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 18:22:41.320616       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3edde11a7e9037281d89cc0f87b82f0eea20cb96289b644d6152987f1b65be33] <==
	E1213 18:18:41.845206       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.110.109.137:443: connect: connection refused" logger="UnhandledError"
	W1213 18:19:05.849952       1 handler_proxy.go:99] no RequestInfo found in the context
	E1213 18:19:05.850006       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1213 18:19:05.850026       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1213 18:19:05.851133       1 handler_proxy.go:99] no RequestInfo found in the context
	E1213 18:19:05.851287       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1213 18:19:05.851303       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1213 18:19:20.442961       1 handler_proxy.go:99] no RequestInfo found in the context
	E1213 18:19:20.443026       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1213 18:19:20.443497       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.132.8:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.132.8:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.132.8:443: connect: connection refused" logger="UnhandledError"
	E1213 18:19:20.444691       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.132.8:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.132.8:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.132.8:443: connect: connection refused" logger="UnhandledError"
	E1213 18:19:20.450163       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.132.8:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.132.8:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.132.8:443: connect: connection refused" logger="UnhandledError"
	I1213 18:19:20.578887       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1213 18:19:54.117536       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:42996: use of closed network connection
	E1213 18:19:54.482076       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:43034: use of closed network connection
	I1213 18:20:20.677503       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1213 18:20:24.148174       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1213 18:20:24.480723       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.109.133.57"}
	I1213 18:22:43.281034       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.98.117.93"}
	
	
	==> kube-controller-manager [003f9ee38f6b439a2728ba924bc15a17baba7b021d1b5c661c1157951ed9412c] <==
	I1213 18:17:59.204328       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1213 18:17:59.204410       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1213 18:17:59.204674       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1213 18:17:59.204798       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1213 18:17:59.204838       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1213 18:17:59.207751       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1213 18:17:59.207961       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1213 18:17:59.208445       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1213 18:17:59.208924       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1213 18:17:59.212216       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 18:17:59.214345       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1213 18:17:59.219550       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1213 18:17:59.230907       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1213 18:17:59.233123       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	E1213 18:18:05.123938       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1213 18:18:29.165946       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1213 18:18:29.166097       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1213 18:18:29.166136       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1213 18:18:29.193894       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1213 18:18:29.216351       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1213 18:18:29.266731       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 18:18:29.321586       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 18:18:44.155535       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1213 18:18:59.279152       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1213 18:18:59.330769       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [57a4c5bd3b052a576bdbd867d075032671fea264b0d670cfb2500f3f7c53a338] <==
	I1213 18:18:01.112133       1 server_linux.go:53] "Using iptables proxy"
	I1213 18:18:01.194915       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1213 18:18:01.295090       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 18:18:01.295122       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1213 18:18:01.295190       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 18:18:01.342579       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 18:18:01.342633       1 server_linux.go:132] "Using iptables Proxier"
	I1213 18:18:01.352979       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 18:18:01.364478       1 server.go:527] "Version info" version="v1.34.2"
	I1213 18:18:01.364504       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 18:18:01.387438       1 config.go:200] "Starting service config controller"
	I1213 18:18:01.387461       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 18:18:01.387543       1 config.go:106] "Starting endpoint slice config controller"
	I1213 18:18:01.387550       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 18:18:01.387565       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 18:18:01.387570       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 18:18:01.388618       1 config.go:309] "Starting node config controller"
	I1213 18:18:01.388630       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 18:18:01.388637       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 18:18:01.487801       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 18:18:01.487843       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 18:18:01.487882       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [4c0b427c73b3bae515b7e2c83cf5f4d2deb0cb58b62c0b619e81dcf9540e3892] <==
	E1213 18:17:52.294117       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 18:17:52.294171       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1213 18:17:52.294230       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1213 18:17:52.294271       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1213 18:17:52.294317       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1213 18:17:52.294361       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1213 18:17:52.294437       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1213 18:17:52.294460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1213 18:17:52.294505       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1213 18:17:52.294588       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1213 18:17:52.294588       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1213 18:17:52.294651       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1213 18:17:52.294655       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1213 18:17:52.294703       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1213 18:17:52.294790       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1213 18:17:52.294874       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1213 18:17:53.123418       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1213 18:17:53.146543       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1213 18:17:53.146786       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1213 18:17:53.158421       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1213 18:17:53.172795       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1213 18:17:53.240887       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1213 18:17:53.243072       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1213 18:17:53.284644       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1213 18:17:55.663649       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 13 18:21:43 addons-377325 kubelet[1270]: I1213 18:21:43.044384    1270 scope.go:117] "RemoveContainer" containerID="de87caaaee4010cb89e6e000edb6b59d54f726d94a6414239673f5958652a7f2"
	Dec 13 18:21:43 addons-377325 kubelet[1270]: I1213 18:21:43.044664    1270 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-f9qf2" secret="" err="secret \"gcp-auth\" not found"
	Dec 13 18:21:43 addons-377325 kubelet[1270]: I1213 18:21:43.044714    1270 scope.go:117] "RemoveContainer" containerID="a6a159c835f62a57b63c859c9aec315ff4fefd937b2985dc2ae49e9cbb9becd6"
	Dec 13 18:21:43 addons-377325 kubelet[1270]: E1213 18:21:43.045204    1270 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 40s restarting failed container=registry-creds pod=registry-creds-764b6fb674-f9qf2_kube-system(e714a411-3862-4ffa-a880-421fa8708466)\"" pod="kube-system/registry-creds-764b6fb674-f9qf2" podUID="e714a411-3862-4ffa-a880-421fa8708466"
	Dec 13 18:21:46 addons-377325 kubelet[1270]: I1213 18:21:46.722542    1270 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-zxcm2" secret="" err="secret \"gcp-auth\" not found"
	Dec 13 18:21:55 addons-377325 kubelet[1270]: I1213 18:21:55.722215    1270 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-qfgpv" secret="" err="secret \"gcp-auth\" not found"
	Dec 13 18:21:55 addons-377325 kubelet[1270]: I1213 18:21:55.722386    1270 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-f9qf2" secret="" err="secret \"gcp-auth\" not found"
	Dec 13 18:21:55 addons-377325 kubelet[1270]: I1213 18:21:55.723368    1270 scope.go:117] "RemoveContainer" containerID="a6a159c835f62a57b63c859c9aec315ff4fefd937b2985dc2ae49e9cbb9becd6"
	Dec 13 18:21:55 addons-377325 kubelet[1270]: E1213 18:21:55.723587    1270 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 40s restarting failed container=registry-creds pod=registry-creds-764b6fb674-f9qf2_kube-system(e714a411-3862-4ffa-a880-421fa8708466)\"" pod="kube-system/registry-creds-764b6fb674-f9qf2" podUID="e714a411-3862-4ffa-a880-421fa8708466"
	Dec 13 18:22:08 addons-377325 kubelet[1270]: I1213 18:22:08.722942    1270 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-f9qf2" secret="" err="secret \"gcp-auth\" not found"
	Dec 13 18:22:08 addons-377325 kubelet[1270]: I1213 18:22:08.723525    1270 scope.go:117] "RemoveContainer" containerID="a6a159c835f62a57b63c859c9aec315ff4fefd937b2985dc2ae49e9cbb9becd6"
	Dec 13 18:22:08 addons-377325 kubelet[1270]: E1213 18:22:08.723784    1270 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 40s restarting failed container=registry-creds pod=registry-creds-764b6fb674-f9qf2_kube-system(e714a411-3862-4ffa-a880-421fa8708466)\"" pod="kube-system/registry-creds-764b6fb674-f9qf2" podUID="e714a411-3862-4ffa-a880-421fa8708466"
	Dec 13 18:22:23 addons-377325 kubelet[1270]: I1213 18:22:23.722886    1270 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-f9qf2" secret="" err="secret \"gcp-auth\" not found"
	Dec 13 18:22:23 addons-377325 kubelet[1270]: I1213 18:22:23.723423    1270 scope.go:117] "RemoveContainer" containerID="a6a159c835f62a57b63c859c9aec315ff4fefd937b2985dc2ae49e9cbb9becd6"
	Dec 13 18:22:24 addons-377325 kubelet[1270]: I1213 18:22:24.210292    1270 scope.go:117] "RemoveContainer" containerID="a6a159c835f62a57b63c859c9aec315ff4fefd937b2985dc2ae49e9cbb9becd6"
	Dec 13 18:22:24 addons-377325 kubelet[1270]: I1213 18:22:24.210635    1270 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-f9qf2" secret="" err="secret \"gcp-auth\" not found"
	Dec 13 18:22:24 addons-377325 kubelet[1270]: I1213 18:22:24.210680    1270 scope.go:117] "RemoveContainer" containerID="de7e7fcdc108c798b0de80dd4a0903882db15275bc7e868c746f3d1bc5f27b30"
	Dec 13 18:22:24 addons-377325 kubelet[1270]: E1213 18:22:24.210827    1270 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=registry-creds pod=registry-creds-764b6fb674-f9qf2_kube-system(e714a411-3862-4ffa-a880-421fa8708466)\"" pod="kube-system/registry-creds-764b6fb674-f9qf2" podUID="e714a411-3862-4ffa-a880-421fa8708466"
	Dec 13 18:22:37 addons-377325 kubelet[1270]: I1213 18:22:37.722798    1270 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-f9qf2" secret="" err="secret \"gcp-auth\" not found"
	Dec 13 18:22:37 addons-377325 kubelet[1270]: I1213 18:22:37.723328    1270 scope.go:117] "RemoveContainer" containerID="de7e7fcdc108c798b0de80dd4a0903882db15275bc7e868c746f3d1bc5f27b30"
	Dec 13 18:22:37 addons-377325 kubelet[1270]: E1213 18:22:37.723553    1270 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=registry-creds pod=registry-creds-764b6fb674-f9qf2_kube-system(e714a411-3862-4ffa-a880-421fa8708466)\"" pod="kube-system/registry-creds-764b6fb674-f9qf2" podUID="e714a411-3862-4ffa-a880-421fa8708466"
	Dec 13 18:22:41 addons-377325 kubelet[1270]: I1213 18:22:41.722274    1270 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-b6lxz" secret="" err="secret \"gcp-auth\" not found"
	Dec 13 18:22:43 addons-377325 kubelet[1270]: I1213 18:22:43.076836    1270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq6bz\" (UniqueName: \"kubernetes.io/projected/b13a0885-929f-46b2-a79c-b44a158d3892-kube-api-access-cq6bz\") pod \"hello-world-app-5d498dc89-lsxsm\" (UID: \"b13a0885-929f-46b2-a79c-b44a158d3892\") " pod="default/hello-world-app-5d498dc89-lsxsm"
	Dec 13 18:22:43 addons-377325 kubelet[1270]: I1213 18:22:43.076948    1270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/b13a0885-929f-46b2-a79c-b44a158d3892-gcp-creds\") pod \"hello-world-app-5d498dc89-lsxsm\" (UID: \"b13a0885-929f-46b2-a79c-b44a158d3892\") " pod="default/hello-world-app-5d498dc89-lsxsm"
	Dec 13 18:22:44 addons-377325 kubelet[1270]: I1213 18:22:44.313697    1270 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-5d498dc89-lsxsm" podStartSLOduration=0.69718053 podStartE2EDuration="1.313676853s" podCreationTimestamp="2025-12-13 18:22:43 +0000 UTC" firstStartedPulling="2025-12-13 18:22:43.420012319 +0000 UTC m=+288.811357522" lastFinishedPulling="2025-12-13 18:22:44.036508634 +0000 UTC m=+289.427853845" observedRunningTime="2025-12-13 18:22:44.30920675 +0000 UTC m=+289.700551961" watchObservedRunningTime="2025-12-13 18:22:44.313676853 +0000 UTC m=+289.705022056"
	
	
	==> storage-provisioner [dae0269172396ca9383a18ef3e4f9883c0bb9bf733a41e2b5d7701c47abcbf45] <==
	W1213 18:22:20.364310       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 18:22:22.367062       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 18:22:22.371439       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 18:22:24.374978       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 18:22:24.381788       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 18:22:26.385267       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 18:22:26.389628       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 18:22:28.392224       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 18:22:28.398781       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 18:22:30.401649       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 18:22:30.406298       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 18:22:32.409149       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 18:22:32.413355       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 18:22:34.416539       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 18:22:34.420928       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 18:22:36.423837       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 18:22:36.428755       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 18:22:38.431649       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 18:22:38.438388       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 18:22:40.441565       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 18:22:40.445822       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 18:22:42.448622       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 18:22:42.453794       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 18:22:44.459475       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 18:22:44.464286       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-377325 -n addons-377325
helpers_test.go:270: (dbg) Run:  kubectl --context addons-377325 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: ingress-nginx-admission-create-bhb5h ingress-nginx-admission-patch-sqd6d
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-377325 describe pod ingress-nginx-admission-create-bhb5h ingress-nginx-admission-patch-sqd6d
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-377325 describe pod ingress-nginx-admission-create-bhb5h ingress-nginx-admission-patch-sqd6d: exit status 1 (96.56894ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-bhb5h" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-sqd6d" not found

** /stderr **
helpers_test.go:288: kubectl --context addons-377325 describe pod ingress-nginx-admission-create-bhb5h ingress-nginx-admission-patch-sqd6d: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-377325 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-377325 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (271.095905ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1213 18:22:46.672891   15126 out.go:360] Setting OutFile to fd 1 ...
	I1213 18:22:46.673133   15126 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:22:46.673149   15126 out.go:374] Setting ErrFile to fd 2...
	I1213 18:22:46.673155   15126 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:22:46.673457   15126 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-2686/.minikube/bin
	I1213 18:22:46.673785   15126 mustload.go:66] Loading cluster: addons-377325
	I1213 18:22:46.674210   15126 config.go:182] Loaded profile config "addons-377325": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 18:22:46.674229   15126 addons.go:622] checking whether the cluster is paused
	I1213 18:22:46.674400   15126 config.go:182] Loaded profile config "addons-377325": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 18:22:46.674439   15126 host.go:66] Checking if "addons-377325" exists ...
	I1213 18:22:46.675185   15126 cli_runner.go:164] Run: docker container inspect addons-377325 --format={{.State.Status}}
	I1213 18:22:46.692986   15126 ssh_runner.go:195] Run: systemctl --version
	I1213 18:22:46.693094   15126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377325
	I1213 18:22:46.711479   15126 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/addons-377325/id_rsa Username:docker}
	I1213 18:22:46.821065   15126 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 18:22:46.821161   15126 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 18:22:46.851486   15126 cri.go:89] found id: "de7e7fcdc108c798b0de80dd4a0903882db15275bc7e868c746f3d1bc5f27b30"
	I1213 18:22:46.851510   15126 cri.go:89] found id: "42d706a88ed1b79de9cbc8220725f23931d77e619f962b73e511fcb0df095dcf"
	I1213 18:22:46.851520   15126 cri.go:89] found id: "1c4f8a1dece343dfd524ce5e6db2a545f5bbcabf4319371df21b295d9978f460"
	I1213 18:22:46.851524   15126 cri.go:89] found id: "f361bc25cf32b543c565a18b16afb390523428d84bc14ad86dbacef94cd618f2"
	I1213 18:22:46.851527   15126 cri.go:89] found id: "352bbc3896f303b3b4b4edcffdd2af5759da504004de35069dcbf6701b7ff404"
	I1213 18:22:46.851531   15126 cri.go:89] found id: "2e7fb6d0ca7acd5082666d2f5b93e6106772a93783323b9d70c8dc01cc803b6b"
	I1213 18:22:46.851534   15126 cri.go:89] found id: "cd33fc9243f510b27e6ee856df4a733493114c65ecedbe49e4d2e4db5c3f1a92"
	I1213 18:22:46.851537   15126 cri.go:89] found id: "3946c9e84e3da8e144dd011e9aad2d763f490b97fc556c1432831aec7351dd15"
	I1213 18:22:46.851540   15126 cri.go:89] found id: "054c83d5a1f87b8b0447a3c96743b01e535aa374946a4476cf156bdf43c4634b"
	I1213 18:22:46.851546   15126 cri.go:89] found id: "87610c2eb50cf16ef807cbc696e6152bee0cc4d51e77b5fea346b538dc7ca77a"
	I1213 18:22:46.851549   15126 cri.go:89] found id: "52764c4f81789f7ac0788d22170eef03d2d3c697ff94cd73d0a431f152db2e0d"
	I1213 18:22:46.851552   15126 cri.go:89] found id: "0a800ad4dd0e939ce2cf0fb3f8e2ebd3fe5f4fe340c694377880af81c0b56b82"
	I1213 18:22:46.851555   15126 cri.go:89] found id: "7dddc3bceec5a40164bf2128e718b8dad6c5c34fd5b6a656b28d732b6f85e291"
	I1213 18:22:46.851558   15126 cri.go:89] found id: "599b8ce504818d0e1d93166a52551dc93f2ae22e19769a32db7f1806184b2db0"
	I1213 18:22:46.851561   15126 cri.go:89] found id: "228e6f9a0fdeda7bb28f407279f8c6549c2abaacc0fe0d2fa8dda1eadc802e23"
	I1213 18:22:46.851566   15126 cri.go:89] found id: "0d77a566cb2c6b0cbe174ab2f0537c30a6a6ba2b40472501b4d0cac4192769a2"
	I1213 18:22:46.851574   15126 cri.go:89] found id: "dae0269172396ca9383a18ef3e4f9883c0bb9bf733a41e2b5d7701c47abcbf45"
	I1213 18:22:46.851578   15126 cri.go:89] found id: "c37b9bf999a3f7ee5efa91a30230aedd4764b122566edbc45a747e71e6f77aee"
	I1213 18:22:46.851581   15126 cri.go:89] found id: "57a4c5bd3b052a576bdbd867d075032671fea264b0d670cfb2500f3f7c53a338"
	I1213 18:22:46.851584   15126 cri.go:89] found id: "05178b358a31f960ebd0c746e41e311b3501e13c8dc83cd6e55fdc24cb53d30a"
	I1213 18:22:46.851588   15126 cri.go:89] found id: "4c0b427c73b3bae515b7e2c83cf5f4d2deb0cb58b62c0b619e81dcf9540e3892"
	I1213 18:22:46.851591   15126 cri.go:89] found id: "003f9ee38f6b439a2728ba924bc15a17baba7b021d1b5c661c1157951ed9412c"
	I1213 18:22:46.851595   15126 cri.go:89] found id: "9f44e406e70a42ff3053d90866118a64ff6559f7d4c5878e24daa08620477af0"
	I1213 18:22:46.851598   15126 cri.go:89] found id: "3edde11a7e9037281d89cc0f87b82f0eea20cb96289b644d6152987f1b65be33"
	I1213 18:22:46.851602   15126 cri.go:89] found id: ""
	I1213 18:22:46.851651   15126 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 18:22:46.866888   15126 out.go:203] 
	W1213 18:22:46.869963   15126 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T18:22:46Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T18:22:46Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 18:22:46.869986   15126 out.go:285] * 
	* 
	W1213 18:22:46.873969   15126 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 18:22:46.876899   15126 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-377325 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-377325 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-377325 addons disable ingress --alsologtostderr -v=1: exit status 11 (271.099376ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1213 18:22:46.941175   15169 out.go:360] Setting OutFile to fd 1 ...
	I1213 18:22:46.941325   15169 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:22:46.941337   15169 out.go:374] Setting ErrFile to fd 2...
	I1213 18:22:46.941343   15169 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:22:46.941577   15169 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-2686/.minikube/bin
	I1213 18:22:46.941854   15169 mustload.go:66] Loading cluster: addons-377325
	I1213 18:22:46.942270   15169 config.go:182] Loaded profile config "addons-377325": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 18:22:46.942288   15169 addons.go:622] checking whether the cluster is paused
	I1213 18:22:46.942393   15169 config.go:182] Loaded profile config "addons-377325": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 18:22:46.942410   15169 host.go:66] Checking if "addons-377325" exists ...
	I1213 18:22:46.942923   15169 cli_runner.go:164] Run: docker container inspect addons-377325 --format={{.State.Status}}
	I1213 18:22:46.960892   15169 ssh_runner.go:195] Run: systemctl --version
	I1213 18:22:46.960956   15169 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377325
	I1213 18:22:46.978389   15169 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/addons-377325/id_rsa Username:docker}
	I1213 18:22:47.092166   15169 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 18:22:47.092277   15169 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 18:22:47.122016   15169 cri.go:89] found id: "de7e7fcdc108c798b0de80dd4a0903882db15275bc7e868c746f3d1bc5f27b30"
	I1213 18:22:47.122048   15169 cri.go:89] found id: "42d706a88ed1b79de9cbc8220725f23931d77e619f962b73e511fcb0df095dcf"
	I1213 18:22:47.122053   15169 cri.go:89] found id: "1c4f8a1dece343dfd524ce5e6db2a545f5bbcabf4319371df21b295d9978f460"
	I1213 18:22:47.122057   15169 cri.go:89] found id: "f361bc25cf32b543c565a18b16afb390523428d84bc14ad86dbacef94cd618f2"
	I1213 18:22:47.122060   15169 cri.go:89] found id: "352bbc3896f303b3b4b4edcffdd2af5759da504004de35069dcbf6701b7ff404"
	I1213 18:22:47.122064   15169 cri.go:89] found id: "2e7fb6d0ca7acd5082666d2f5b93e6106772a93783323b9d70c8dc01cc803b6b"
	I1213 18:22:47.122067   15169 cri.go:89] found id: "cd33fc9243f510b27e6ee856df4a733493114c65ecedbe49e4d2e4db5c3f1a92"
	I1213 18:22:47.122070   15169 cri.go:89] found id: "3946c9e84e3da8e144dd011e9aad2d763f490b97fc556c1432831aec7351dd15"
	I1213 18:22:47.122074   15169 cri.go:89] found id: "054c83d5a1f87b8b0447a3c96743b01e535aa374946a4476cf156bdf43c4634b"
	I1213 18:22:47.122092   15169 cri.go:89] found id: "87610c2eb50cf16ef807cbc696e6152bee0cc4d51e77b5fea346b538dc7ca77a"
	I1213 18:22:47.122097   15169 cri.go:89] found id: "52764c4f81789f7ac0788d22170eef03d2d3c697ff94cd73d0a431f152db2e0d"
	I1213 18:22:47.122100   15169 cri.go:89] found id: "0a800ad4dd0e939ce2cf0fb3f8e2ebd3fe5f4fe340c694377880af81c0b56b82"
	I1213 18:22:47.122104   15169 cri.go:89] found id: "7dddc3bceec5a40164bf2128e718b8dad6c5c34fd5b6a656b28d732b6f85e291"
	I1213 18:22:47.122107   15169 cri.go:89] found id: "599b8ce504818d0e1d93166a52551dc93f2ae22e19769a32db7f1806184b2db0"
	I1213 18:22:47.122111   15169 cri.go:89] found id: "228e6f9a0fdeda7bb28f407279f8c6549c2abaacc0fe0d2fa8dda1eadc802e23"
	I1213 18:22:47.122119   15169 cri.go:89] found id: "0d77a566cb2c6b0cbe174ab2f0537c30a6a6ba2b40472501b4d0cac4192769a2"
	I1213 18:22:47.122125   15169 cri.go:89] found id: "dae0269172396ca9383a18ef3e4f9883c0bb9bf733a41e2b5d7701c47abcbf45"
	I1213 18:22:47.122130   15169 cri.go:89] found id: "c37b9bf999a3f7ee5efa91a30230aedd4764b122566edbc45a747e71e6f77aee"
	I1213 18:22:47.122133   15169 cri.go:89] found id: "57a4c5bd3b052a576bdbd867d075032671fea264b0d670cfb2500f3f7c53a338"
	I1213 18:22:47.122136   15169 cri.go:89] found id: "05178b358a31f960ebd0c746e41e311b3501e13c8dc83cd6e55fdc24cb53d30a"
	I1213 18:22:47.122141   15169 cri.go:89] found id: "4c0b427c73b3bae515b7e2c83cf5f4d2deb0cb58b62c0b619e81dcf9540e3892"
	I1213 18:22:47.122146   15169 cri.go:89] found id: "003f9ee38f6b439a2728ba924bc15a17baba7b021d1b5c661c1157951ed9412c"
	I1213 18:22:47.122149   15169 cri.go:89] found id: "9f44e406e70a42ff3053d90866118a64ff6559f7d4c5878e24daa08620477af0"
	I1213 18:22:47.122153   15169 cri.go:89] found id: "3edde11a7e9037281d89cc0f87b82f0eea20cb96289b644d6152987f1b65be33"
	I1213 18:22:47.122170   15169 cri.go:89] found id: ""
	I1213 18:22:47.122229   15169 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 18:22:47.137258   15169 out.go:203] 
	W1213 18:22:47.140230   15169 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T18:22:47Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T18:22:47Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 18:22:47.140250   15169 out.go:285] * 
	* 
	W1213 18:22:47.144048   15169 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 18:22:47.147036   15169 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-377325 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (143.34s)
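Note on the failure mode: every addons enable/disable failure in this report (Ingress above, and InspektorGadget, MetricsServer, CSI, Headlamp and others below) aborts at the same step. Per the stderr, minikube first checks whether the cluster is paused by running "sudo runc list -f json" on the node; on this CRI-O profile /run/runc does not exist, runc exits with status 1 ("open /run/runc: no such file or directory"), and the addon command exits 11 with MK_ADDON_DISABLE_PAUSED (or MK_ADDON_ENABLE_PAUSED for enable). A minimal standalone sketch of how that check fails, outside the test suite and assuming only a Linux host with runc and sudo available (the --root flag just spells out runc's default state directory):

// Standalone illustration; not code from minikube or this test suite.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same command the paused check runs per the log above, with runc's default
	// state directory made explicit; when /run/runc is absent the command exits
	// with status 1 and prints the error seen in the stderr blocks of this report.
	out, err := exec.Command("sudo", "runc", "--root", "/run/runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		fmt.Printf("paused check failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("running containers: %s\n", out)
}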

                                                
                                    
TestAddons/parallel/InspektorGadget (6.29s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-btw98" [7a7f337d-948b-49dd-b425-d10f71648b96] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004110065s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-377325 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-377325 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (274.35986ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 18:20:23.597240   12809 out.go:360] Setting OutFile to fd 1 ...
	I1213 18:20:23.597433   12809 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:20:23.597446   12809 out.go:374] Setting ErrFile to fd 2...
	I1213 18:20:23.597452   12809 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:20:23.597755   12809 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-2686/.minikube/bin
	I1213 18:20:23.598093   12809 mustload.go:66] Loading cluster: addons-377325
	I1213 18:20:23.598530   12809 config.go:182] Loaded profile config "addons-377325": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 18:20:23.598552   12809 addons.go:622] checking whether the cluster is paused
	I1213 18:20:23.598735   12809 config.go:182] Loaded profile config "addons-377325": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 18:20:23.598754   12809 host.go:66] Checking if "addons-377325" exists ...
	I1213 18:20:23.599279   12809 cli_runner.go:164] Run: docker container inspect addons-377325 --format={{.State.Status}}
	I1213 18:20:23.616849   12809 ssh_runner.go:195] Run: systemctl --version
	I1213 18:20:23.616902   12809 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377325
	I1213 18:20:23.642077   12809 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/addons-377325/id_rsa Username:docker}
	I1213 18:20:23.748582   12809 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 18:20:23.748719   12809 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 18:20:23.782766   12809 cri.go:89] found id: "42d706a88ed1b79de9cbc8220725f23931d77e619f962b73e511fcb0df095dcf"
	I1213 18:20:23.782786   12809 cri.go:89] found id: "1c4f8a1dece343dfd524ce5e6db2a545f5bbcabf4319371df21b295d9978f460"
	I1213 18:20:23.782791   12809 cri.go:89] found id: "f361bc25cf32b543c565a18b16afb390523428d84bc14ad86dbacef94cd618f2"
	I1213 18:20:23.782795   12809 cri.go:89] found id: "352bbc3896f303b3b4b4edcffdd2af5759da504004de35069dcbf6701b7ff404"
	I1213 18:20:23.782798   12809 cri.go:89] found id: "2e7fb6d0ca7acd5082666d2f5b93e6106772a93783323b9d70c8dc01cc803b6b"
	I1213 18:20:23.782802   12809 cri.go:89] found id: "cd33fc9243f510b27e6ee856df4a733493114c65ecedbe49e4d2e4db5c3f1a92"
	I1213 18:20:23.782813   12809 cri.go:89] found id: "3946c9e84e3da8e144dd011e9aad2d763f490b97fc556c1432831aec7351dd15"
	I1213 18:20:23.782821   12809 cri.go:89] found id: "054c83d5a1f87b8b0447a3c96743b01e535aa374946a4476cf156bdf43c4634b"
	I1213 18:20:23.782825   12809 cri.go:89] found id: "87610c2eb50cf16ef807cbc696e6152bee0cc4d51e77b5fea346b538dc7ca77a"
	I1213 18:20:23.782835   12809 cri.go:89] found id: "52764c4f81789f7ac0788d22170eef03d2d3c697ff94cd73d0a431f152db2e0d"
	I1213 18:20:23.782838   12809 cri.go:89] found id: "0a800ad4dd0e939ce2cf0fb3f8e2ebd3fe5f4fe340c694377880af81c0b56b82"
	I1213 18:20:23.782845   12809 cri.go:89] found id: "7dddc3bceec5a40164bf2128e718b8dad6c5c34fd5b6a656b28d732b6f85e291"
	I1213 18:20:23.782848   12809 cri.go:89] found id: "599b8ce504818d0e1d93166a52551dc93f2ae22e19769a32db7f1806184b2db0"
	I1213 18:20:23.782851   12809 cri.go:89] found id: "228e6f9a0fdeda7bb28f407279f8c6549c2abaacc0fe0d2fa8dda1eadc802e23"
	I1213 18:20:23.782854   12809 cri.go:89] found id: "0d77a566cb2c6b0cbe174ab2f0537c30a6a6ba2b40472501b4d0cac4192769a2"
	I1213 18:20:23.782862   12809 cri.go:89] found id: "dae0269172396ca9383a18ef3e4f9883c0bb9bf733a41e2b5d7701c47abcbf45"
	I1213 18:20:23.782869   12809 cri.go:89] found id: "c37b9bf999a3f7ee5efa91a30230aedd4764b122566edbc45a747e71e6f77aee"
	I1213 18:20:23.782874   12809 cri.go:89] found id: "57a4c5bd3b052a576bdbd867d075032671fea264b0d670cfb2500f3f7c53a338"
	I1213 18:20:23.782877   12809 cri.go:89] found id: "05178b358a31f960ebd0c746e41e311b3501e13c8dc83cd6e55fdc24cb53d30a"
	I1213 18:20:23.782880   12809 cri.go:89] found id: "4c0b427c73b3bae515b7e2c83cf5f4d2deb0cb58b62c0b619e81dcf9540e3892"
	I1213 18:20:23.782885   12809 cri.go:89] found id: "003f9ee38f6b439a2728ba924bc15a17baba7b021d1b5c661c1157951ed9412c"
	I1213 18:20:23.782888   12809 cri.go:89] found id: "9f44e406e70a42ff3053d90866118a64ff6559f7d4c5878e24daa08620477af0"
	I1213 18:20:23.782891   12809 cri.go:89] found id: "3edde11a7e9037281d89cc0f87b82f0eea20cb96289b644d6152987f1b65be33"
	I1213 18:20:23.782894   12809 cri.go:89] found id: ""
	I1213 18:20:23.782959   12809 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 18:20:23.798626   12809 out.go:203] 
	W1213 18:20:23.801514   12809 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T18:20:23Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T18:20:23Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 18:20:23.801542   12809 out.go:285] * 
	* 
	W1213 18:20:23.805460   12809 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 18:20:23.808531   12809 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-377325 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.29s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.36s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 15.687457ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-xj9z5" [16a6665b-52ee-4f79-9a95-e9367d750ab1] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003358683s
addons_test.go:465: (dbg) Run:  kubectl --context addons-377325 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-377325 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-377325 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (262.063435ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 18:20:17.327420   12691 out.go:360] Setting OutFile to fd 1 ...
	I1213 18:20:17.327683   12691 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:20:17.327697   12691 out.go:374] Setting ErrFile to fd 2...
	I1213 18:20:17.327702   12691 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:20:17.328065   12691 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-2686/.minikube/bin
	I1213 18:20:17.328518   12691 mustload.go:66] Loading cluster: addons-377325
	I1213 18:20:17.328934   12691 config.go:182] Loaded profile config "addons-377325": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 18:20:17.328954   12691 addons.go:622] checking whether the cluster is paused
	I1213 18:20:17.329126   12691 config.go:182] Loaded profile config "addons-377325": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 18:20:17.329146   12691 host.go:66] Checking if "addons-377325" exists ...
	I1213 18:20:17.329754   12691 cli_runner.go:164] Run: docker container inspect addons-377325 --format={{.State.Status}}
	I1213 18:20:17.348158   12691 ssh_runner.go:195] Run: systemctl --version
	I1213 18:20:17.348223   12691 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377325
	I1213 18:20:17.368591   12691 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/addons-377325/id_rsa Username:docker}
	I1213 18:20:17.471503   12691 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 18:20:17.471638   12691 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 18:20:17.500444   12691 cri.go:89] found id: "42d706a88ed1b79de9cbc8220725f23931d77e619f962b73e511fcb0df095dcf"
	I1213 18:20:17.500463   12691 cri.go:89] found id: "1c4f8a1dece343dfd524ce5e6db2a545f5bbcabf4319371df21b295d9978f460"
	I1213 18:20:17.500467   12691 cri.go:89] found id: "f361bc25cf32b543c565a18b16afb390523428d84bc14ad86dbacef94cd618f2"
	I1213 18:20:17.500471   12691 cri.go:89] found id: "352bbc3896f303b3b4b4edcffdd2af5759da504004de35069dcbf6701b7ff404"
	I1213 18:20:17.500474   12691 cri.go:89] found id: "2e7fb6d0ca7acd5082666d2f5b93e6106772a93783323b9d70c8dc01cc803b6b"
	I1213 18:20:17.500478   12691 cri.go:89] found id: "cd33fc9243f510b27e6ee856df4a733493114c65ecedbe49e4d2e4db5c3f1a92"
	I1213 18:20:17.500481   12691 cri.go:89] found id: "3946c9e84e3da8e144dd011e9aad2d763f490b97fc556c1432831aec7351dd15"
	I1213 18:20:17.500485   12691 cri.go:89] found id: "054c83d5a1f87b8b0447a3c96743b01e535aa374946a4476cf156bdf43c4634b"
	I1213 18:20:17.500488   12691 cri.go:89] found id: "87610c2eb50cf16ef807cbc696e6152bee0cc4d51e77b5fea346b538dc7ca77a"
	I1213 18:20:17.500497   12691 cri.go:89] found id: "52764c4f81789f7ac0788d22170eef03d2d3c697ff94cd73d0a431f152db2e0d"
	I1213 18:20:17.500500   12691 cri.go:89] found id: "0a800ad4dd0e939ce2cf0fb3f8e2ebd3fe5f4fe340c694377880af81c0b56b82"
	I1213 18:20:17.500507   12691 cri.go:89] found id: "7dddc3bceec5a40164bf2128e718b8dad6c5c34fd5b6a656b28d732b6f85e291"
	I1213 18:20:17.500510   12691 cri.go:89] found id: "599b8ce504818d0e1d93166a52551dc93f2ae22e19769a32db7f1806184b2db0"
	I1213 18:20:17.500513   12691 cri.go:89] found id: "228e6f9a0fdeda7bb28f407279f8c6549c2abaacc0fe0d2fa8dda1eadc802e23"
	I1213 18:20:17.500516   12691 cri.go:89] found id: "0d77a566cb2c6b0cbe174ab2f0537c30a6a6ba2b40472501b4d0cac4192769a2"
	I1213 18:20:17.500524   12691 cri.go:89] found id: "dae0269172396ca9383a18ef3e4f9883c0bb9bf733a41e2b5d7701c47abcbf45"
	I1213 18:20:17.500527   12691 cri.go:89] found id: "c37b9bf999a3f7ee5efa91a30230aedd4764b122566edbc45a747e71e6f77aee"
	I1213 18:20:17.500532   12691 cri.go:89] found id: "57a4c5bd3b052a576bdbd867d075032671fea264b0d670cfb2500f3f7c53a338"
	I1213 18:20:17.500535   12691 cri.go:89] found id: "05178b358a31f960ebd0c746e41e311b3501e13c8dc83cd6e55fdc24cb53d30a"
	I1213 18:20:17.500538   12691 cri.go:89] found id: "4c0b427c73b3bae515b7e2c83cf5f4d2deb0cb58b62c0b619e81dcf9540e3892"
	I1213 18:20:17.500543   12691 cri.go:89] found id: "003f9ee38f6b439a2728ba924bc15a17baba7b021d1b5c661c1157951ed9412c"
	I1213 18:20:17.500546   12691 cri.go:89] found id: "9f44e406e70a42ff3053d90866118a64ff6559f7d4c5878e24daa08620477af0"
	I1213 18:20:17.500549   12691 cri.go:89] found id: "3edde11a7e9037281d89cc0f87b82f0eea20cb96289b644d6152987f1b65be33"
	I1213 18:20:17.500552   12691 cri.go:89] found id: ""
	I1213 18:20:17.500602   12691 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 18:20:17.515299   12691 out.go:203] 
	W1213 18:20:17.518335   12691 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T18:20:17Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T18:20:17Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 18:20:17.518376   12691 out.go:285] * 
	* 
	W1213 18:20:17.522106   12691 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 18:20:17.525150   12691 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-377325 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (6.36s)

                                                
                                    
TestAddons/parallel/CSI (41.54s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1213 18:19:58.162657    4637 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1213 18:19:58.167084    4637 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1213 18:19:58.167121    4637 kapi.go:107] duration metric: took 4.474378ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 4.485997ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-377325 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-377325 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-377325 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-377325 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-377325 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-377325 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-377325 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-377325 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-377325 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-377325 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-377325 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-377325 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-377325 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-377325 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-377325 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-377325 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [8eb76480-7d5e-4b20-a049-11bfc50ab64b] Pending
helpers_test.go:353: "task-pv-pod" [8eb76480-7d5e-4b20-a049-11bfc50ab64b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [8eb76480-7d5e-4b20-a049-11bfc50ab64b] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.005057883s
addons_test.go:574: (dbg) Run:  kubectl --context addons-377325 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-377325 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-377325 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-377325 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-377325 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-377325 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-377325 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-377325 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-377325 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-377325 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-377325 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-377325 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-377325 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-377325 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-377325 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-377325 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [3d5b226c-7d29-47ae-8145-b3c95d2fa5af] Pending
helpers_test.go:353: "task-pv-pod-restore" [3d5b226c-7d29-47ae-8145-b3c95d2fa5af] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003847518s
addons_test.go:616: (dbg) Run:  kubectl --context addons-377325 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-377325 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-377325 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-377325 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-377325 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (275.951209ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 18:20:39.228490   13332 out.go:360] Setting OutFile to fd 1 ...
	I1213 18:20:39.228720   13332 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:20:39.228752   13332 out.go:374] Setting ErrFile to fd 2...
	I1213 18:20:39.228773   13332 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:20:39.229079   13332 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-2686/.minikube/bin
	I1213 18:20:39.229384   13332 mustload.go:66] Loading cluster: addons-377325
	I1213 18:20:39.229808   13332 config.go:182] Loaded profile config "addons-377325": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 18:20:39.229853   13332 addons.go:622] checking whether the cluster is paused
	I1213 18:20:39.229983   13332 config.go:182] Loaded profile config "addons-377325": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 18:20:39.230017   13332 host.go:66] Checking if "addons-377325" exists ...
	I1213 18:20:39.230547   13332 cli_runner.go:164] Run: docker container inspect addons-377325 --format={{.State.Status}}
	I1213 18:20:39.250949   13332 ssh_runner.go:195] Run: systemctl --version
	I1213 18:20:39.251000   13332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377325
	I1213 18:20:39.269178   13332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/addons-377325/id_rsa Username:docker}
	I1213 18:20:39.381774   13332 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 18:20:39.381902   13332 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 18:20:39.417487   13332 cri.go:89] found id: "42d706a88ed1b79de9cbc8220725f23931d77e619f962b73e511fcb0df095dcf"
	I1213 18:20:39.417520   13332 cri.go:89] found id: "1c4f8a1dece343dfd524ce5e6db2a545f5bbcabf4319371df21b295d9978f460"
	I1213 18:20:39.417526   13332 cri.go:89] found id: "f361bc25cf32b543c565a18b16afb390523428d84bc14ad86dbacef94cd618f2"
	I1213 18:20:39.417530   13332 cri.go:89] found id: "352bbc3896f303b3b4b4edcffdd2af5759da504004de35069dcbf6701b7ff404"
	I1213 18:20:39.417534   13332 cri.go:89] found id: "2e7fb6d0ca7acd5082666d2f5b93e6106772a93783323b9d70c8dc01cc803b6b"
	I1213 18:20:39.417538   13332 cri.go:89] found id: "cd33fc9243f510b27e6ee856df4a733493114c65ecedbe49e4d2e4db5c3f1a92"
	I1213 18:20:39.417549   13332 cri.go:89] found id: "3946c9e84e3da8e144dd011e9aad2d763f490b97fc556c1432831aec7351dd15"
	I1213 18:20:39.417553   13332 cri.go:89] found id: "054c83d5a1f87b8b0447a3c96743b01e535aa374946a4476cf156bdf43c4634b"
	I1213 18:20:39.417557   13332 cri.go:89] found id: "87610c2eb50cf16ef807cbc696e6152bee0cc4d51e77b5fea346b538dc7ca77a"
	I1213 18:20:39.417564   13332 cri.go:89] found id: "52764c4f81789f7ac0788d22170eef03d2d3c697ff94cd73d0a431f152db2e0d"
	I1213 18:20:39.417567   13332 cri.go:89] found id: "0a800ad4dd0e939ce2cf0fb3f8e2ebd3fe5f4fe340c694377880af81c0b56b82"
	I1213 18:20:39.417571   13332 cri.go:89] found id: "7dddc3bceec5a40164bf2128e718b8dad6c5c34fd5b6a656b28d732b6f85e291"
	I1213 18:20:39.417574   13332 cri.go:89] found id: "599b8ce504818d0e1d93166a52551dc93f2ae22e19769a32db7f1806184b2db0"
	I1213 18:20:39.417577   13332 cri.go:89] found id: "228e6f9a0fdeda7bb28f407279f8c6549c2abaacc0fe0d2fa8dda1eadc802e23"
	I1213 18:20:39.417580   13332 cri.go:89] found id: "0d77a566cb2c6b0cbe174ab2f0537c30a6a6ba2b40472501b4d0cac4192769a2"
	I1213 18:20:39.417586   13332 cri.go:89] found id: "dae0269172396ca9383a18ef3e4f9883c0bb9bf733a41e2b5d7701c47abcbf45"
	I1213 18:20:39.417592   13332 cri.go:89] found id: "c37b9bf999a3f7ee5efa91a30230aedd4764b122566edbc45a747e71e6f77aee"
	I1213 18:20:39.417597   13332 cri.go:89] found id: "57a4c5bd3b052a576bdbd867d075032671fea264b0d670cfb2500f3f7c53a338"
	I1213 18:20:39.417600   13332 cri.go:89] found id: "05178b358a31f960ebd0c746e41e311b3501e13c8dc83cd6e55fdc24cb53d30a"
	I1213 18:20:39.417602   13332 cri.go:89] found id: "4c0b427c73b3bae515b7e2c83cf5f4d2deb0cb58b62c0b619e81dcf9540e3892"
	I1213 18:20:39.417607   13332 cri.go:89] found id: "003f9ee38f6b439a2728ba924bc15a17baba7b021d1b5c661c1157951ed9412c"
	I1213 18:20:39.417611   13332 cri.go:89] found id: "9f44e406e70a42ff3053d90866118a64ff6559f7d4c5878e24daa08620477af0"
	I1213 18:20:39.417614   13332 cri.go:89] found id: "3edde11a7e9037281d89cc0f87b82f0eea20cb96289b644d6152987f1b65be33"
	I1213 18:20:39.417623   13332 cri.go:89] found id: ""
	I1213 18:20:39.417682   13332 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 18:20:39.432721   13332 out.go:203] 
	W1213 18:20:39.435668   13332 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T18:20:39Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T18:20:39Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 18:20:39.435701   13332 out.go:285] * 
	* 
	W1213 18:20:39.439884   13332 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 18:20:39.442961   13332 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-377325 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-377325 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-377325 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (256.434559ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 18:20:39.501264   13378 out.go:360] Setting OutFile to fd 1 ...
	I1213 18:20:39.501491   13378 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:20:39.501506   13378 out.go:374] Setting ErrFile to fd 2...
	I1213 18:20:39.501513   13378 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:20:39.501865   13378 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-2686/.minikube/bin
	I1213 18:20:39.502187   13378 mustload.go:66] Loading cluster: addons-377325
	I1213 18:20:39.502606   13378 config.go:182] Loaded profile config "addons-377325": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 18:20:39.502626   13378 addons.go:622] checking whether the cluster is paused
	I1213 18:20:39.502770   13378 config.go:182] Loaded profile config "addons-377325": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 18:20:39.502788   13378 host.go:66] Checking if "addons-377325" exists ...
	I1213 18:20:39.503404   13378 cli_runner.go:164] Run: docker container inspect addons-377325 --format={{.State.Status}}
	I1213 18:20:39.520302   13378 ssh_runner.go:195] Run: systemctl --version
	I1213 18:20:39.520368   13378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377325
	I1213 18:20:39.539475   13378 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/addons-377325/id_rsa Username:docker}
	I1213 18:20:39.643512   13378 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 18:20:39.643641   13378 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 18:20:39.674217   13378 cri.go:89] found id: "42d706a88ed1b79de9cbc8220725f23931d77e619f962b73e511fcb0df095dcf"
	I1213 18:20:39.674250   13378 cri.go:89] found id: "1c4f8a1dece343dfd524ce5e6db2a545f5bbcabf4319371df21b295d9978f460"
	I1213 18:20:39.674256   13378 cri.go:89] found id: "f361bc25cf32b543c565a18b16afb390523428d84bc14ad86dbacef94cd618f2"
	I1213 18:20:39.674260   13378 cri.go:89] found id: "352bbc3896f303b3b4b4edcffdd2af5759da504004de35069dcbf6701b7ff404"
	I1213 18:20:39.674264   13378 cri.go:89] found id: "2e7fb6d0ca7acd5082666d2f5b93e6106772a93783323b9d70c8dc01cc803b6b"
	I1213 18:20:39.674269   13378 cri.go:89] found id: "cd33fc9243f510b27e6ee856df4a733493114c65ecedbe49e4d2e4db5c3f1a92"
	I1213 18:20:39.674289   13378 cri.go:89] found id: "3946c9e84e3da8e144dd011e9aad2d763f490b97fc556c1432831aec7351dd15"
	I1213 18:20:39.674298   13378 cri.go:89] found id: "054c83d5a1f87b8b0447a3c96743b01e535aa374946a4476cf156bdf43c4634b"
	I1213 18:20:39.674301   13378 cri.go:89] found id: "87610c2eb50cf16ef807cbc696e6152bee0cc4d51e77b5fea346b538dc7ca77a"
	I1213 18:20:39.674308   13378 cri.go:89] found id: "52764c4f81789f7ac0788d22170eef03d2d3c697ff94cd73d0a431f152db2e0d"
	I1213 18:20:39.674311   13378 cri.go:89] found id: "0a800ad4dd0e939ce2cf0fb3f8e2ebd3fe5f4fe340c694377880af81c0b56b82"
	I1213 18:20:39.674315   13378 cri.go:89] found id: "7dddc3bceec5a40164bf2128e718b8dad6c5c34fd5b6a656b28d732b6f85e291"
	I1213 18:20:39.674335   13378 cri.go:89] found id: "599b8ce504818d0e1d93166a52551dc93f2ae22e19769a32db7f1806184b2db0"
	I1213 18:20:39.674342   13378 cri.go:89] found id: "228e6f9a0fdeda7bb28f407279f8c6549c2abaacc0fe0d2fa8dda1eadc802e23"
	I1213 18:20:39.674345   13378 cri.go:89] found id: "0d77a566cb2c6b0cbe174ab2f0537c30a6a6ba2b40472501b4d0cac4192769a2"
	I1213 18:20:39.674364   13378 cri.go:89] found id: "dae0269172396ca9383a18ef3e4f9883c0bb9bf733a41e2b5d7701c47abcbf45"
	I1213 18:20:39.674372   13378 cri.go:89] found id: "c37b9bf999a3f7ee5efa91a30230aedd4764b122566edbc45a747e71e6f77aee"
	I1213 18:20:39.674382   13378 cri.go:89] found id: "57a4c5bd3b052a576bdbd867d075032671fea264b0d670cfb2500f3f7c53a338"
	I1213 18:20:39.674385   13378 cri.go:89] found id: "05178b358a31f960ebd0c746e41e311b3501e13c8dc83cd6e55fdc24cb53d30a"
	I1213 18:20:39.674388   13378 cri.go:89] found id: "4c0b427c73b3bae515b7e2c83cf5f4d2deb0cb58b62c0b619e81dcf9540e3892"
	I1213 18:20:39.674393   13378 cri.go:89] found id: "003f9ee38f6b439a2728ba924bc15a17baba7b021d1b5c661c1157951ed9412c"
	I1213 18:20:39.674399   13378 cri.go:89] found id: "9f44e406e70a42ff3053d90866118a64ff6559f7d4c5878e24daa08620477af0"
	I1213 18:20:39.674402   13378 cri.go:89] found id: "3edde11a7e9037281d89cc0f87b82f0eea20cb96289b644d6152987f1b65be33"
	I1213 18:20:39.674410   13378 cri.go:89] found id: ""
	I1213 18:20:39.674462   13378 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 18:20:39.690134   13378 out.go:203] 
	W1213 18:20:39.692998   13378 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T18:20:39Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T18:20:39Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 18:20:39.693070   13378 out.go:285] * 
	* 
	W1213 18:20:39.696814   13378 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 18:20:39.699849   13378 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-377325 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (41.54s)
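The repeated "kubectl --context addons-377325 get pvc hpvc -o jsonpath={.status.phase}" lines above are the test helper polling the claim until it reports Bound (and likewise for hpvc-restore). A rough standalone sketch of that wait loop, assuming only that kubectl is on PATH and that the addons-377325 context and hpvc claim named in the log exist; this is illustrative, not the actual helpers_test.go code:

// Standalone illustration of the polling pattern visible in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCBound reads the claim's .status.phase every two seconds until it
// reports "Bound" or the timeout elapses.
func waitForPVCBound(kubectlContext, namespace, claim string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubectlContext,
			"get", "pvc", claim, "-n", namespace,
			"-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %s/%s not Bound within %s", namespace, claim, timeout)
}

func main() {
	if err := waitForPVCBound("addons-377325", "default", "hpvc", 6*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("pvc hpvc is Bound")
}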

                                                
                                    
TestAddons/parallel/Headlamp (3.41s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-377325 --alsologtostderr -v=1
addons_test.go:810: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-377325 --alsologtostderr -v=1: exit status 11 (341.636349ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 18:19:54.838046   11699 out.go:360] Setting OutFile to fd 1 ...
	I1213 18:19:54.838298   11699 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:19:54.838312   11699 out.go:374] Setting ErrFile to fd 2...
	I1213 18:19:54.838318   11699 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:19:54.838572   11699 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-2686/.minikube/bin
	I1213 18:19:54.838846   11699 mustload.go:66] Loading cluster: addons-377325
	I1213 18:19:54.839298   11699 config.go:182] Loaded profile config "addons-377325": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 18:19:54.839318   11699 addons.go:622] checking whether the cluster is paused
	I1213 18:19:54.839434   11699 config.go:182] Loaded profile config "addons-377325": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 18:19:54.839448   11699 host.go:66] Checking if "addons-377325" exists ...
	I1213 18:19:54.840039   11699 cli_runner.go:164] Run: docker container inspect addons-377325 --format={{.State.Status}}
	I1213 18:19:54.862710   11699 ssh_runner.go:195] Run: systemctl --version
	I1213 18:19:54.862760   11699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377325
	I1213 18:19:54.882612   11699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/addons-377325/id_rsa Username:docker}
	I1213 18:19:55.032407   11699 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 18:19:55.032522   11699 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 18:19:55.064265   11699 cri.go:89] found id: "42d706a88ed1b79de9cbc8220725f23931d77e619f962b73e511fcb0df095dcf"
	I1213 18:19:55.064291   11699 cri.go:89] found id: "1c4f8a1dece343dfd524ce5e6db2a545f5bbcabf4319371df21b295d9978f460"
	I1213 18:19:55.064297   11699 cri.go:89] found id: "f361bc25cf32b543c565a18b16afb390523428d84bc14ad86dbacef94cd618f2"
	I1213 18:19:55.064301   11699 cri.go:89] found id: "352bbc3896f303b3b4b4edcffdd2af5759da504004de35069dcbf6701b7ff404"
	I1213 18:19:55.064305   11699 cri.go:89] found id: "2e7fb6d0ca7acd5082666d2f5b93e6106772a93783323b9d70c8dc01cc803b6b"
	I1213 18:19:55.064309   11699 cri.go:89] found id: "cd33fc9243f510b27e6ee856df4a733493114c65ecedbe49e4d2e4db5c3f1a92"
	I1213 18:19:55.064313   11699 cri.go:89] found id: "3946c9e84e3da8e144dd011e9aad2d763f490b97fc556c1432831aec7351dd15"
	I1213 18:19:55.064316   11699 cri.go:89] found id: "054c83d5a1f87b8b0447a3c96743b01e535aa374946a4476cf156bdf43c4634b"
	I1213 18:19:55.064320   11699 cri.go:89] found id: "87610c2eb50cf16ef807cbc696e6152bee0cc4d51e77b5fea346b538dc7ca77a"
	I1213 18:19:55.064326   11699 cri.go:89] found id: "52764c4f81789f7ac0788d22170eef03d2d3c697ff94cd73d0a431f152db2e0d"
	I1213 18:19:55.064330   11699 cri.go:89] found id: "0a800ad4dd0e939ce2cf0fb3f8e2ebd3fe5f4fe340c694377880af81c0b56b82"
	I1213 18:19:55.064333   11699 cri.go:89] found id: "7dddc3bceec5a40164bf2128e718b8dad6c5c34fd5b6a656b28d732b6f85e291"
	I1213 18:19:55.064336   11699 cri.go:89] found id: "599b8ce504818d0e1d93166a52551dc93f2ae22e19769a32db7f1806184b2db0"
	I1213 18:19:55.064340   11699 cri.go:89] found id: "228e6f9a0fdeda7bb28f407279f8c6549c2abaacc0fe0d2fa8dda1eadc802e23"
	I1213 18:19:55.064343   11699 cri.go:89] found id: "0d77a566cb2c6b0cbe174ab2f0537c30a6a6ba2b40472501b4d0cac4192769a2"
	I1213 18:19:55.064355   11699 cri.go:89] found id: "dae0269172396ca9383a18ef3e4f9883c0bb9bf733a41e2b5d7701c47abcbf45"
	I1213 18:19:55.064362   11699 cri.go:89] found id: "c37b9bf999a3f7ee5efa91a30230aedd4764b122566edbc45a747e71e6f77aee"
	I1213 18:19:55.064367   11699 cri.go:89] found id: "57a4c5bd3b052a576bdbd867d075032671fea264b0d670cfb2500f3f7c53a338"
	I1213 18:19:55.064371   11699 cri.go:89] found id: "05178b358a31f960ebd0c746e41e311b3501e13c8dc83cd6e55fdc24cb53d30a"
	I1213 18:19:55.064374   11699 cri.go:89] found id: "4c0b427c73b3bae515b7e2c83cf5f4d2deb0cb58b62c0b619e81dcf9540e3892"
	I1213 18:19:55.064379   11699 cri.go:89] found id: "003f9ee38f6b439a2728ba924bc15a17baba7b021d1b5c661c1157951ed9412c"
	I1213 18:19:55.064390   11699 cri.go:89] found id: "9f44e406e70a42ff3053d90866118a64ff6559f7d4c5878e24daa08620477af0"
	I1213 18:19:55.064393   11699 cri.go:89] found id: "3edde11a7e9037281d89cc0f87b82f0eea20cb96289b644d6152987f1b65be33"
	I1213 18:19:55.064396   11699 cri.go:89] found id: ""
	I1213 18:19:55.064450   11699 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 18:19:55.081834   11699 out.go:203] 
	W1213 18:19:55.084873   11699 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T18:19:55Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T18:19:55Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 18:19:55.084899   11699 out.go:285] * 
	* 
	W1213 18:19:55.088691   11699 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 18:19:55.091767   11699 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:812: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-377325 --alsologtostderr -v=1": exit status 11
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-377325
helpers_test.go:244: (dbg) docker inspect addons-377325:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d1b08c8b0cba43afd7eb70b58179d249064cf2c7007d64232063258d4d30138e",
	        "Created": "2025-12-13T18:17:30.991623713Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 6053,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T18:17:31.075997651Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/d1b08c8b0cba43afd7eb70b58179d249064cf2c7007d64232063258d4d30138e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d1b08c8b0cba43afd7eb70b58179d249064cf2c7007d64232063258d4d30138e/hostname",
	        "HostsPath": "/var/lib/docker/containers/d1b08c8b0cba43afd7eb70b58179d249064cf2c7007d64232063258d4d30138e/hosts",
	        "LogPath": "/var/lib/docker/containers/d1b08c8b0cba43afd7eb70b58179d249064cf2c7007d64232063258d4d30138e/d1b08c8b0cba43afd7eb70b58179d249064cf2c7007d64232063258d4d30138e-json.log",
	        "Name": "/addons-377325",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-377325:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-377325",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d1b08c8b0cba43afd7eb70b58179d249064cf2c7007d64232063258d4d30138e",
	                "LowerDir": "/var/lib/docker/overlay2/99be71c0b30ed4d376bc0a5a25800fc91dd30b6dda394c858acec718b94b33e5-init/diff:/var/lib/docker/overlay2/4cda671c3c20fb572bbb254b6cb2d66de67b46788c2aa883ec19024f1ff16f23/diff",
	                "MergedDir": "/var/lib/docker/overlay2/99be71c0b30ed4d376bc0a5a25800fc91dd30b6dda394c858acec718b94b33e5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/99be71c0b30ed4d376bc0a5a25800fc91dd30b6dda394c858acec718b94b33e5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/99be71c0b30ed4d376bc0a5a25800fc91dd30b6dda394c858acec718b94b33e5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-377325",
	                "Source": "/var/lib/docker/volumes/addons-377325/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-377325",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-377325",
	                "name.minikube.sigs.k8s.io": "addons-377325",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fed9ad655d5468e6cb16857658e8407795d260aa3c682c4e53643b51f1120c2b",
	            "SandboxKey": "/var/run/docker/netns/fed9ad655d54",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-377325": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fe:88:6b:c7:a6:e8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1e114794b49d1e802f2fed399dde1a4b5db42d2b08d3c3681323b57e7b03fa8f",
	                    "EndpointID": "9f63b24e7e36ce34efb182fd09615d82ae13ed6d59dc4d906dbab7ad4a878e9c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-377325",
	                        "d1b08c8b0cba"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-377325 -n addons-377325
helpers_test.go:253: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p addons-377325 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p addons-377325 logs -n 25: (1.505852357s)
helpers_test.go:261: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-682129 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-682129   │ jenkins │ v1.37.0 │ 13 Dec 25 18:16 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 13 Dec 25 18:16 UTC │ 13 Dec 25 18:16 UTC │
	│ delete  │ -p download-only-682129                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-682129   │ jenkins │ v1.37.0 │ 13 Dec 25 18:16 UTC │ 13 Dec 25 18:16 UTC │
	│ start   │ -o=json --download-only -p download-only-380287 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-380287   │ jenkins │ v1.37.0 │ 13 Dec 25 18:16 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 13 Dec 25 18:16 UTC │ 13 Dec 25 18:16 UTC │
	│ delete  │ -p download-only-380287                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-380287   │ jenkins │ v1.37.0 │ 13 Dec 25 18:16 UTC │ 13 Dec 25 18:16 UTC │
	│ start   │ -o=json --download-only -p download-only-512620 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                         │ download-only-512620   │ jenkins │ v1.37.0 │ 13 Dec 25 18:16 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 13 Dec 25 18:17 UTC │ 13 Dec 25 18:17 UTC │
	│ delete  │ -p download-only-512620                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-512620   │ jenkins │ v1.37.0 │ 13 Dec 25 18:17 UTC │ 13 Dec 25 18:17 UTC │
	│ delete  │ -p download-only-682129                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-682129   │ jenkins │ v1.37.0 │ 13 Dec 25 18:17 UTC │ 13 Dec 25 18:17 UTC │
	│ delete  │ -p download-only-380287                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-380287   │ jenkins │ v1.37.0 │ 13 Dec 25 18:17 UTC │ 13 Dec 25 18:17 UTC │
	│ delete  │ -p download-only-512620                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-512620   │ jenkins │ v1.37.0 │ 13 Dec 25 18:17 UTC │ 13 Dec 25 18:17 UTC │
	│ start   │ --download-only -p download-docker-351651 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-351651 │ jenkins │ v1.37.0 │ 13 Dec 25 18:17 UTC │                     │
	│ delete  │ -p download-docker-351651                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-351651 │ jenkins │ v1.37.0 │ 13 Dec 25 18:17 UTC │ 13 Dec 25 18:17 UTC │
	│ start   │ --download-only -p binary-mirror-542781 --alsologtostderr --binary-mirror http://127.0.0.1:45875 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-542781   │ jenkins │ v1.37.0 │ 13 Dec 25 18:17 UTC │                     │
	│ delete  │ -p binary-mirror-542781                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-542781   │ jenkins │ v1.37.0 │ 13 Dec 25 18:17 UTC │ 13 Dec 25 18:17 UTC │
	│ addons  │ enable dashboard -p addons-377325                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-377325          │ jenkins │ v1.37.0 │ 13 Dec 25 18:17 UTC │                     │
	│ addons  │ disable dashboard -p addons-377325                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-377325          │ jenkins │ v1.37.0 │ 13 Dec 25 18:17 UTC │                     │
	│ start   │ -p addons-377325 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-377325          │ jenkins │ v1.37.0 │ 13 Dec 25 18:17 UTC │ 13 Dec 25 18:19 UTC │
	│ addons  │ addons-377325 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-377325          │ jenkins │ v1.37.0 │ 13 Dec 25 18:19 UTC │                     │
	│ addons  │ addons-377325 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-377325          │ jenkins │ v1.37.0 │ 13 Dec 25 18:19 UTC │                     │
	│ addons  │ enable headlamp -p addons-377325 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-377325          │ jenkins │ v1.37.0 │ 13 Dec 25 18:19 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 18:17:06
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 18:17:06.165344    5650 out.go:360] Setting OutFile to fd 1 ...
	I1213 18:17:06.165576    5650 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:17:06.165606    5650 out.go:374] Setting ErrFile to fd 2...
	I1213 18:17:06.165624    5650 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:17:06.165925    5650 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-2686/.minikube/bin
	I1213 18:17:06.166479    5650 out.go:368] Setting JSON to false
	I1213 18:17:06.167502    5650 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":3579,"bootTime":1765646248,"procs":149,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 18:17:06.167622    5650 start.go:143] virtualization:  
	I1213 18:17:06.171327    5650 out.go:179] * [addons-377325] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 18:17:06.174362    5650 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 18:17:06.174458    5650 notify.go:221] Checking for updates...
	I1213 18:17:06.180279    5650 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 18:17:06.183361    5650 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-2686/kubeconfig
	I1213 18:17:06.186360    5650 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-2686/.minikube
	I1213 18:17:06.189445    5650 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 18:17:06.192548    5650 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 18:17:06.195684    5650 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 18:17:06.230813    5650 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 18:17:06.230954    5650 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 18:17:06.294902    5650 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-13 18:17:06.285111034 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 18:17:06.295020    5650 docker.go:319] overlay module found
	I1213 18:17:06.298258    5650 out.go:179] * Using the docker driver based on user configuration
	I1213 18:17:06.301246    5650 start.go:309] selected driver: docker
	I1213 18:17:06.301270    5650 start.go:927] validating driver "docker" against <nil>
	I1213 18:17:06.301283    5650 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 18:17:06.302071    5650 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 18:17:06.363083    5650 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-13 18:17:06.353760297 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 18:17:06.363248    5650 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 18:17:06.363493    5650 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 18:17:06.366551    5650 out.go:179] * Using Docker driver with root privileges
	I1213 18:17:06.369396    5650 cni.go:84] Creating CNI manager for ""
	I1213 18:17:06.369477    5650 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 18:17:06.369496    5650 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 18:17:06.369588    5650 start.go:353] cluster config:
	{Name:addons-377325 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-377325 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 18:17:06.372904    5650 out.go:179] * Starting "addons-377325" primary control-plane node in "addons-377325" cluster
	I1213 18:17:06.376053    5650 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 18:17:06.379211    5650 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 18:17:06.382247    5650 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 18:17:06.382318    5650 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-2686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1213 18:17:06.382334    5650 cache.go:65] Caching tarball of preloaded images
	I1213 18:17:06.382351    5650 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 18:17:06.382453    5650 preload.go:238] Found /home/jenkins/minikube-integration/22122-2686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 18:17:06.382465    5650 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1213 18:17:06.382869    5650 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/config.json ...
	I1213 18:17:06.382942    5650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/config.json: {Name:mkaaf44029fbe14b9df08ab6a9609ef9606bb7fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 18:17:06.400760    5650 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f to local cache
	I1213 18:17:06.400945    5650 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory
	I1213 18:17:06.400984    5650 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory, skipping pull
	I1213 18:17:06.400993    5650 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in cache, skipping pull
	I1213 18:17:06.401028    5650 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f as a tarball
	I1213 18:17:06.401035    5650 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f from local cache
	I1213 18:17:24.282113    5650 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f from cached tarball
	I1213 18:17:24.282154    5650 cache.go:243] Successfully downloaded all kic artifacts
	I1213 18:17:24.282192    5650 start.go:360] acquireMachinesLock for addons-377325: {Name:mkf44ed8b66583f628999561be83d83d1e36fea0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 18:17:24.282307    5650 start.go:364] duration metric: took 91.612µs to acquireMachinesLock for "addons-377325"
	I1213 18:17:24.282337    5650 start.go:93] Provisioning new machine with config: &{Name:addons-377325 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-377325 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 18:17:24.282404    5650 start.go:125] createHost starting for "" (driver="docker")
	I1213 18:17:24.285997    5650 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1213 18:17:24.286246    5650 start.go:159] libmachine.API.Create for "addons-377325" (driver="docker")
	I1213 18:17:24.286288    5650 client.go:173] LocalClient.Create starting
	I1213 18:17:24.286403    5650 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem
	I1213 18:17:24.882640    5650 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem
	I1213 18:17:25.108341    5650 cli_runner.go:164] Run: docker network inspect addons-377325 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 18:17:25.124309    5650 cli_runner.go:211] docker network inspect addons-377325 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 18:17:25.124408    5650 network_create.go:284] running [docker network inspect addons-377325] to gather additional debugging logs...
	I1213 18:17:25.124431    5650 cli_runner.go:164] Run: docker network inspect addons-377325
	W1213 18:17:25.142275    5650 cli_runner.go:211] docker network inspect addons-377325 returned with exit code 1
	I1213 18:17:25.142306    5650 network_create.go:287] error running [docker network inspect addons-377325]: docker network inspect addons-377325: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-377325 not found
	I1213 18:17:25.142321    5650 network_create.go:289] output of [docker network inspect addons-377325]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-377325 not found
	
	** /stderr **
	I1213 18:17:25.142429    5650 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 18:17:25.159472    5650 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001bbb020}
	I1213 18:17:25.159519    5650 network_create.go:124] attempt to create docker network addons-377325 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1213 18:17:25.159574    5650 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-377325 addons-377325
	I1213 18:17:25.217912    5650 network_create.go:108] docker network addons-377325 192.168.49.0/24 created
	I1213 18:17:25.217944    5650 kic.go:121] calculated static IP "192.168.49.2" for the "addons-377325" container
	I1213 18:17:25.218017    5650 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 18:17:25.234398    5650 cli_runner.go:164] Run: docker volume create addons-377325 --label name.minikube.sigs.k8s.io=addons-377325 --label created_by.minikube.sigs.k8s.io=true
	I1213 18:17:25.252755    5650 oci.go:103] Successfully created a docker volume addons-377325
	I1213 18:17:25.252858    5650 cli_runner.go:164] Run: docker run --rm --name addons-377325-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-377325 --entrypoint /usr/bin/test -v addons-377325:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 18:17:26.903998    5650 cli_runner.go:217] Completed: docker run --rm --name addons-377325-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-377325 --entrypoint /usr/bin/test -v addons-377325:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib: (1.651094506s)
	I1213 18:17:26.904029    5650 oci.go:107] Successfully prepared a docker volume addons-377325
	I1213 18:17:26.904076    5650 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 18:17:26.904095    5650 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 18:17:26.904165    5650 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22122-2686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-377325:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 18:17:30.919873    5650 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22122-2686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-377325:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (4.015627968s)
	I1213 18:17:30.919903    5650 kic.go:203] duration metric: took 4.015804963s to extract preloaded images to volume ...
	W1213 18:17:30.920042    5650 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1213 18:17:30.920158    5650 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 18:17:30.976087    5650 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-377325 --name addons-377325 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-377325 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-377325 --network addons-377325 --ip 192.168.49.2 --volume addons-377325:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 18:17:31.336454    5650 cli_runner.go:164] Run: docker container inspect addons-377325 --format={{.State.Running}}
	I1213 18:17:31.355757    5650 cli_runner.go:164] Run: docker container inspect addons-377325 --format={{.State.Status}}
	I1213 18:17:31.379043    5650 cli_runner.go:164] Run: docker exec addons-377325 stat /var/lib/dpkg/alternatives/iptables
	I1213 18:17:31.433234    5650 oci.go:144] the created container "addons-377325" has a running status.
	I1213 18:17:31.433271    5650 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22122-2686/.minikube/machines/addons-377325/id_rsa...
	I1213 18:17:31.576804    5650 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22122-2686/.minikube/machines/addons-377325/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 18:17:31.599774    5650 cli_runner.go:164] Run: docker container inspect addons-377325 --format={{.State.Status}}
	I1213 18:17:31.626330    5650 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 18:17:31.626353    5650 kic_runner.go:114] Args: [docker exec --privileged addons-377325 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 18:17:31.685880    5650 cli_runner.go:164] Run: docker container inspect addons-377325 --format={{.State.Status}}
	I1213 18:17:31.709248    5650 machine.go:94] provisionDockerMachine start ...
	I1213 18:17:31.709346    5650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377325
	I1213 18:17:31.727791    5650 main.go:143] libmachine: Using SSH client type: native
	I1213 18:17:31.728174    5650 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1213 18:17:31.728193    5650 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 18:17:31.728897    5650 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1213 18:17:34.880391    5650 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-377325
	
	I1213 18:17:34.880460    5650 ubuntu.go:182] provisioning hostname "addons-377325"
	I1213 18:17:34.880536    5650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377325
	I1213 18:17:34.897253    5650 main.go:143] libmachine: Using SSH client type: native
	I1213 18:17:34.897570    5650 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1213 18:17:34.897587    5650 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-377325 && echo "addons-377325" | sudo tee /etc/hostname
	I1213 18:17:35.055092    5650 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-377325
	
	I1213 18:17:35.055178    5650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377325
	I1213 18:17:35.072733    5650 main.go:143] libmachine: Using SSH client type: native
	I1213 18:17:35.073074    5650 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1213 18:17:35.073091    5650 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-377325' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-377325/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-377325' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 18:17:35.225254    5650 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 18:17:35.225282    5650 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-2686/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-2686/.minikube}
	I1213 18:17:35.225302    5650 ubuntu.go:190] setting up certificates
	I1213 18:17:35.225319    5650 provision.go:84] configureAuth start
	I1213 18:17:35.225382    5650 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-377325
	I1213 18:17:35.242802    5650 provision.go:143] copyHostCerts
	I1213 18:17:35.242908    5650 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem (1082 bytes)
	I1213 18:17:35.243040    5650 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem (1123 bytes)
	I1213 18:17:35.243103    5650 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem (1675 bytes)
	I1213 18:17:35.243154    5650 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem org=jenkins.addons-377325 san=[127.0.0.1 192.168.49.2 addons-377325 localhost minikube]
	I1213 18:17:35.636228    5650 provision.go:177] copyRemoteCerts
	I1213 18:17:35.636295    5650 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 18:17:35.636370    5650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377325
	I1213 18:17:35.654416    5650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/addons-377325/id_rsa Username:docker}
	I1213 18:17:35.756547    5650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 18:17:35.773517    5650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1213 18:17:35.790326    5650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 18:17:35.807497    5650 provision.go:87] duration metric: took 582.150808ms to configureAuth
	I1213 18:17:35.807523    5650 ubuntu.go:206] setting minikube options for container-runtime
	I1213 18:17:35.807742    5650 config.go:182] Loaded profile config "addons-377325": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 18:17:35.807845    5650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377325
	I1213 18:17:35.824759    5650 main.go:143] libmachine: Using SSH client type: native
	I1213 18:17:35.825097    5650 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1213 18:17:35.825119    5650 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 18:17:36.132625    5650 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 18:17:36.132647    5650 machine.go:97] duration metric: took 4.423379072s to provisionDockerMachine
	I1213 18:17:36.132657    5650 client.go:176] duration metric: took 11.846357088s to LocalClient.Create
	I1213 18:17:36.132677    5650 start.go:167] duration metric: took 11.846433118s to libmachine.API.Create "addons-377325"
	I1213 18:17:36.132684    5650 start.go:293] postStartSetup for "addons-377325" (driver="docker")
	I1213 18:17:36.132694    5650 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 18:17:36.132756    5650 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 18:17:36.132803    5650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377325
	I1213 18:17:36.150592    5650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/addons-377325/id_rsa Username:docker}
	I1213 18:17:36.258125    5650 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 18:17:36.261755    5650 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 18:17:36.261781    5650 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 18:17:36.261793    5650 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-2686/.minikube/addons for local assets ...
	I1213 18:17:36.261870    5650 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-2686/.minikube/files for local assets ...
	I1213 18:17:36.261896    5650 start.go:296] duration metric: took 129.20579ms for postStartSetup
	I1213 18:17:36.262232    5650 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-377325
	I1213 18:17:36.279585    5650 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/config.json ...
	I1213 18:17:36.279873    5650 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 18:17:36.279915    5650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377325
	I1213 18:17:36.299384    5650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/addons-377325/id_rsa Username:docker}
	I1213 18:17:36.402546    5650 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 18:17:36.407480    5650 start.go:128] duration metric: took 12.125061927s to createHost
	I1213 18:17:36.407505    5650 start.go:83] releasing machines lock for "addons-377325", held for 12.125183544s
	I1213 18:17:36.407576    5650 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-377325
	I1213 18:17:36.424845    5650 ssh_runner.go:195] Run: cat /version.json
	I1213 18:17:36.424903    5650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377325
	I1213 18:17:36.425187    5650 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 18:17:36.425248    5650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377325
	I1213 18:17:36.446723    5650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/addons-377325/id_rsa Username:docker}
	I1213 18:17:36.450815    5650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/addons-377325/id_rsa Username:docker}
	I1213 18:17:36.549817    5650 ssh_runner.go:195] Run: systemctl --version
	I1213 18:17:36.639265    5650 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 18:17:36.673379    5650 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 18:17:36.677272    5650 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 18:17:36.677337    5650 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 18:17:36.704311    5650 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1213 18:17:36.704330    5650 start.go:496] detecting cgroup driver to use...
	I1213 18:17:36.704360    5650 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 18:17:36.704408    5650 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 18:17:36.721092    5650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 18:17:36.733313    5650 docker.go:218] disabling cri-docker service (if available) ...
	I1213 18:17:36.733377    5650 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 18:17:36.750310    5650 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 18:17:36.768682    5650 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 18:17:36.885544    5650 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 18:17:37.021437    5650 docker.go:234] disabling docker service ...
	I1213 18:17:37.021571    5650 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 18:17:37.045903    5650 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 18:17:37.059388    5650 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 18:17:37.182843    5650 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 18:17:37.303695    5650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 18:17:37.316414    5650 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 18:17:37.331451    5650 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 18:17:37.331540    5650 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:17:37.340307    5650 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 18:17:37.340410    5650 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:17:37.349593    5650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:17:37.358626    5650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:17:37.367423    5650 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 18:17:37.375605    5650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:17:37.384088    5650 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:17:37.397213    5650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:17:37.405902    5650 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 18:17:37.413059    5650 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1213 18:17:37.413146    5650 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1213 18:17:37.427194    5650 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 18:17:37.435304    5650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 18:17:37.554388    5650 ssh_runner.go:195] Run: sudo systemctl restart crio
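For reference, the CRI-O preparation recorded in the lines above can be reproduced by hand on the node. This is a condensed sketch of the same commands the log shows (printf stands in for the tee pipe minikube uses, paths and image tag taken from this run):

    printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo modprobe br_netfilter
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload && sudo systemctl restart crio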
	I1213 18:17:37.732225    5650 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 18:17:37.732321    5650 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 18:17:37.736232    5650 start.go:564] Will wait 60s for crictl version
	I1213 18:17:37.736296    5650 ssh_runner.go:195] Run: which crictl
	I1213 18:17:37.739814    5650 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 18:17:37.765302    5650 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 18:17:37.765441    5650 ssh_runner.go:195] Run: crio --version
	I1213 18:17:37.796482    5650 ssh_runner.go:195] Run: crio --version
	I1213 18:17:37.828076    5650 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1213 18:17:37.831016    5650 cli_runner.go:164] Run: docker network inspect addons-377325 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 18:17:37.847344    5650 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 18:17:37.851072    5650 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 18:17:37.861103    5650 kubeadm.go:884] updating cluster {Name:addons-377325 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-377325 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 18:17:37.861236    5650 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 18:17:37.861294    5650 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 18:17:37.893842    5650 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 18:17:37.893869    5650 crio.go:433] Images already preloaded, skipping extraction
	I1213 18:17:37.893925    5650 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 18:17:37.922239    5650 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 18:17:37.922264    5650 cache_images.go:86] Images are preloaded, skipping loading
	I1213 18:17:37.922272    5650 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1213 18:17:37.922363    5650 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-377325 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-377325 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
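Once the node is running, the kubelet unit and the drop-in generated from the template above (installed below as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf) can be inspected directly; a sketch, assuming the profile name from this run:

    minikube -p addons-377325 ssh "systemctl cat kubelet"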
	I1213 18:17:37.922457    5650 ssh_runner.go:195] Run: crio config
	I1213 18:17:37.992739    5650 cni.go:84] Creating CNI manager for ""
	I1213 18:17:37.992761    5650 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 18:17:37.992782    5650 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 18:17:37.992806    5650 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-377325 NodeName:addons-377325 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 18:17:37.992933    5650 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-377325"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 18:17:37.993033    5650 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 18:17:38.000948    5650 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 18:17:38.001141    5650 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 18:17:38.020789    5650 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1213 18:17:38.035845    5650 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 18:17:38.050671    5650 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
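The kubeadm.yaml staged here is the config consumed by the kubeadm init call further down. If the rendered config needs to be checked without touching the cluster, kubeadm's dry-run mode can be pointed at the same file; a sketch, assuming the binary path used in this log and borrowing part of the preflight-ignore list used later:

    sudo /var/lib/minikube/binaries/v1.34.2/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run --ignore-preflight-errors=SystemVerification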
	I1213 18:17:38.066196    5650 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 18:17:38.070391    5650 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 18:17:38.081392    5650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 18:17:38.196924    5650 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 18:17:38.212420    5650 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325 for IP: 192.168.49.2
	I1213 18:17:38.212441    5650 certs.go:195] generating shared ca certs ...
	I1213 18:17:38.212456    5650 certs.go:227] acquiring lock for ca certs: {Name:mkf9b87b9a1a82bdfcae8a54365751154f6d858f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 18:17:38.212608    5650 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key
	I1213 18:17:38.579950    5650 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt ...
	I1213 18:17:38.579985    5650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt: {Name:mk2f407ae7978a5cf334863b6824308cf93b4a09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 18:17:38.580177    5650 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key ...
	I1213 18:17:38.580190    5650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key: {Name:mkeb7ff4c2cb1968fb6d9a7cd6276eef31fcc6eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 18:17:38.580278    5650 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key
	I1213 18:17:38.862741    5650 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.crt ...
	I1213 18:17:38.862775    5650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.crt: {Name:mkfa7f7e3f20875cf22ab2ed8c3cfc16a80ee9ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 18:17:38.862957    5650 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key ...
	I1213 18:17:38.862970    5650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key: {Name:mk1446a47835ef28b2059aa5658af7fa98c57ad9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 18:17:38.863060    5650 certs.go:257] generating profile certs ...
	I1213 18:17:38.863118    5650 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/client.key
	I1213 18:17:38.863130    5650 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/client.crt with IP's: []
	I1213 18:17:39.270584    5650 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/client.crt ...
	I1213 18:17:39.270633    5650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/client.crt: {Name:mk1ec8956285149ceef36aacbe439c65f6350ff4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 18:17:39.270825    5650 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/client.key ...
	I1213 18:17:39.270838    5650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/client.key: {Name:mk72f4efafb2648fe93915168325e2b985ffd41c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 18:17:39.270925    5650 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/apiserver.key.00813dc7
	I1213 18:17:39.270944    5650 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/apiserver.crt.00813dc7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1213 18:17:39.567949    5650 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/apiserver.crt.00813dc7 ...
	I1213 18:17:39.567981    5650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/apiserver.crt.00813dc7: {Name:mk6b2dd4c39675e3aa614252695fb1cf173de2a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 18:17:39.568158    5650 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/apiserver.key.00813dc7 ...
	I1213 18:17:39.568176    5650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/apiserver.key.00813dc7: {Name:mkfa6b7b6d106e68e398aaea691221fae913d661 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 18:17:39.568257    5650 certs.go:382] copying /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/apiserver.crt.00813dc7 -> /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/apiserver.crt
	I1213 18:17:39.568335    5650 certs.go:386] copying /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/apiserver.key.00813dc7 -> /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/apiserver.key
	I1213 18:17:39.568388    5650 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/proxy-client.key
	I1213 18:17:39.568408    5650 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/proxy-client.crt with IP's: []
	I1213 18:17:39.901879    5650 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/proxy-client.crt ...
	I1213 18:17:39.901912    5650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/proxy-client.crt: {Name:mkc6c28a231ef85233e4ebd475bd379d65375db2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 18:17:39.902091    5650 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/proxy-client.key ...
	I1213 18:17:39.902105    5650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/proxy-client.key: {Name:mk39ac3efb272791f2fd5624547a5dddc0e5658b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 18:17:39.902291    5650 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 18:17:39.902335    5650 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem (1082 bytes)
	I1213 18:17:39.902366    5650 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem (1123 bytes)
	I1213 18:17:39.902395    5650 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem (1675 bytes)
	I1213 18:17:39.902962    5650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 18:17:39.921668    5650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 18:17:39.940625    5650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 18:17:39.959239    5650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 18:17:39.976975    5650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1213 18:17:39.994960    5650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 18:17:40.019627    5650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 18:17:40.047405    5650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 18:17:40.067854    5650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 18:17:40.090614    5650 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 18:17:40.104222    5650 ssh_runner.go:195] Run: openssl version
	I1213 18:17:40.111022    5650 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 18:17:40.119183    5650 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 18:17:40.127194    5650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 18:17:40.131239    5650 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I1213 18:17:40.131327    5650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 18:17:40.173161    5650 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 18:17:40.181078    5650 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
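The hash-and-symlink step above is how the minikube CA is registered with the node's OpenSSL trust directory: the certificate is linked under its subject-hash name (b5213941.0 in this run) so lookups in /etc/ssl/certs can find it. The equivalent manual steps, as a sketch:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"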
	I1213 18:17:40.188767    5650 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 18:17:40.192469    5650 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 18:17:40.192561    5650 kubeadm.go:401] StartCluster: {Name:addons-377325 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-377325 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 18:17:40.192655    5650 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 18:17:40.192726    5650 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 18:17:40.219561    5650 cri.go:89] found id: ""
	I1213 18:17:40.219629    5650 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 18:17:40.227572    5650 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 18:17:40.235335    5650 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 18:17:40.235442    5650 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 18:17:40.243166    5650 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 18:17:40.243191    5650 kubeadm.go:158] found existing configuration files:
	
	I1213 18:17:40.243242    5650 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 18:17:40.250903    5650 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 18:17:40.250966    5650 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 18:17:40.258265    5650 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 18:17:40.265924    5650 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 18:17:40.265996    5650 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 18:17:40.273410    5650 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 18:17:40.280825    5650 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 18:17:40.280891    5650 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 18:17:40.288100    5650 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 18:17:40.295828    5650 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 18:17:40.295901    5650 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 18:17:40.303305    5650 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 18:17:40.367937    5650 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1213 18:17:40.368314    5650 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 18:17:40.436356    5650 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 18:17:55.359161    5650 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1213 18:17:55.359218    5650 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 18:17:55.359323    5650 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 18:17:55.359384    5650 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 18:17:55.359422    5650 kubeadm.go:319] OS: Linux
	I1213 18:17:55.359474    5650 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 18:17:55.359526    5650 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 18:17:55.359577    5650 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 18:17:55.359627    5650 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 18:17:55.359679    5650 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 18:17:55.359739    5650 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 18:17:55.359788    5650 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 18:17:55.359840    5650 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 18:17:55.359891    5650 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 18:17:55.359967    5650 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 18:17:55.360065    5650 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 18:17:55.360159    5650 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 18:17:55.360225    5650 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 18:17:55.365071    5650 out.go:252]   - Generating certificates and keys ...
	I1213 18:17:55.365198    5650 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 18:17:55.365264    5650 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 18:17:55.365343    5650 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 18:17:55.365401    5650 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 18:17:55.365463    5650 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 18:17:55.365522    5650 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 18:17:55.365595    5650 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 18:17:55.365727    5650 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-377325 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1213 18:17:55.365784    5650 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 18:17:55.365913    5650 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-377325 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1213 18:17:55.366031    5650 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 18:17:55.366101    5650 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 18:17:55.366163    5650 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 18:17:55.366230    5650 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 18:17:55.366284    5650 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 18:17:55.366340    5650 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 18:17:55.366431    5650 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 18:17:55.366517    5650 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 18:17:55.366599    5650 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 18:17:55.366707    5650 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 18:17:55.366822    5650 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 18:17:55.369620    5650 out.go:252]   - Booting up control plane ...
	I1213 18:17:55.369769    5650 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 18:17:55.369885    5650 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 18:17:55.369986    5650 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 18:17:55.370094    5650 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 18:17:55.370193    5650 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 18:17:55.370300    5650 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 18:17:55.370388    5650 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 18:17:55.370431    5650 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 18:17:55.370564    5650 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 18:17:55.370672    5650 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 18:17:55.370741    5650 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.006829363s
	I1213 18:17:55.370837    5650 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1213 18:17:55.370923    5650 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1213 18:17:55.371016    5650 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1213 18:17:55.371098    5650 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1213 18:17:55.371177    5650 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.491560762s
	I1213 18:17:55.371247    5650 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.84068988s
	I1213 18:17:55.371329    5650 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.502190953s
	I1213 18:17:55.371438    5650 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 18:17:55.371566    5650 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 18:17:55.371628    5650 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 18:17:55.371903    5650 kubeadm.go:319] [mark-control-plane] Marking the node addons-377325 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1213 18:17:55.371980    5650 kubeadm.go:319] [bootstrap-token] Using token: k0r6nn.wrth4ud4rzw0uc9v
	I1213 18:17:55.376846    5650 out.go:252]   - Configuring RBAC rules ...
	I1213 18:17:55.376991    5650 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 18:17:55.377214    5650 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1213 18:17:55.377379    5650 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 18:17:55.377528    5650 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 18:17:55.377654    5650 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 18:17:55.377757    5650 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 18:17:55.377900    5650 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1213 18:17:55.377962    5650 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1213 18:17:55.378028    5650 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1213 18:17:55.378042    5650 kubeadm.go:319] 
	I1213 18:17:55.378104    5650 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1213 18:17:55.378116    5650 kubeadm.go:319] 
	I1213 18:17:55.378200    5650 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1213 18:17:55.378207    5650 kubeadm.go:319] 
	I1213 18:17:55.378237    5650 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1213 18:17:55.378311    5650 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 18:17:55.378378    5650 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 18:17:55.378385    5650 kubeadm.go:319] 
	I1213 18:17:55.378449    5650 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1213 18:17:55.378461    5650 kubeadm.go:319] 
	I1213 18:17:55.378526    5650 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1213 18:17:55.378541    5650 kubeadm.go:319] 
	I1213 18:17:55.378608    5650 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1213 18:17:55.378699    5650 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 18:17:55.378792    5650 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 18:17:55.378804    5650 kubeadm.go:319] 
	I1213 18:17:55.378928    5650 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1213 18:17:55.379030    5650 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1213 18:17:55.379038    5650 kubeadm.go:319] 
	I1213 18:17:55.379136    5650 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token k0r6nn.wrth4ud4rzw0uc9v \
	I1213 18:17:55.379270    5650 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:5c855727c547190fbfc8dabe20c5acea2e54aecf6fee3a83d21da995a7e3060d \
	I1213 18:17:55.379296    5650 kubeadm.go:319] 	--control-plane 
	I1213 18:17:55.379338    5650 kubeadm.go:319] 
	I1213 18:17:55.379470    5650 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1213 18:17:55.379494    5650 kubeadm.go:319] 
	I1213 18:17:55.379592    5650 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token k0r6nn.wrth4ud4rzw0uc9v \
	I1213 18:17:55.379741    5650 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:5c855727c547190fbfc8dabe20c5acea2e54aecf6fee3a83d21da995a7e3060d 
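The join commands printed above are informational for this single-node run; minikube drives any node joins itself. Should the bootstrap token expire (its TTL is 24h per the config above), a fresh join command can be printed on the control plane; a sketch, assuming the binary path from this log:

    sudo /var/lib/minikube/binaries/v1.34.2/kubeadm token create --print-join-command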
	I1213 18:17:55.379776    5650 cni.go:84] Creating CNI manager for ""
	I1213 18:17:55.379800    5650 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 18:17:55.384743    5650 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1213 18:17:55.387752    5650 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1213 18:17:55.391775    5650 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1213 18:17:55.391793    5650 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1213 18:17:55.406052    5650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
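The manifest applied here installs kindnet, the CNI recommended above for the docker driver with the crio runtime. A quick way to confirm it came up, assuming kindnet's usual app=kindnet label and the kubeconfig path from this run:

    sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get pods -l app=kindnet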
	I1213 18:17:55.707532    5650 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 18:17:55.707717    5650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 18:17:55.707822    5650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-377325 minikube.k8s.io/updated_at=2025_12_13T18_17_55_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=142a8bd7cb3f031b5f72a3965bb211dc77d9e1a7 minikube.k8s.io/name=addons-377325 minikube.k8s.io/primary=true
	I1213 18:17:55.721314    5650 ops.go:34] apiserver oom_adj: -16
	I1213 18:17:55.835054    5650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 18:17:56.335833    5650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 18:17:56.835140    5650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 18:17:57.336140    5650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 18:17:57.836106    5650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 18:17:58.335200    5650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 18:17:58.835128    5650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 18:17:59.335914    5650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 18:17:59.835140    5650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 18:17:59.985689    5650 kubeadm.go:1114] duration metric: took 4.278030452s to wait for elevateKubeSystemPrivileges
	I1213 18:17:59.985719    5650 kubeadm.go:403] duration metric: took 19.793161926s to StartCluster
	I1213 18:17:59.985737    5650 settings.go:142] acquiring lock: {Name:mkabef07beee93a0619ef6b8f854900ab9ed0899 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 18:17:59.985847    5650 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22122-2686/kubeconfig
	I1213 18:17:59.986233    5650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/kubeconfig: {Name:mkd364151ba0e08b56cf2ae826abbcc274faaabf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 18:17:59.986409    5650 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 18:17:59.986583    5650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 18:17:59.986836    5650 config.go:182] Loaded profile config "addons-377325": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 18:17:59.986873    5650 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1213 18:17:59.986945    5650 addons.go:70] Setting yakd=true in profile "addons-377325"
	I1213 18:17:59.986960    5650 addons.go:239] Setting addon yakd=true in "addons-377325"
	I1213 18:17:59.986985    5650 host.go:66] Checking if "addons-377325" exists ...
	I1213 18:17:59.987474    5650 cli_runner.go:164] Run: docker container inspect addons-377325 --format={{.State.Status}}
	I1213 18:17:59.987851    5650 addons.go:70] Setting inspektor-gadget=true in profile "addons-377325"
	I1213 18:17:59.987869    5650 addons.go:239] Setting addon inspektor-gadget=true in "addons-377325"
	I1213 18:17:59.987891    5650 host.go:66] Checking if "addons-377325" exists ...
	I1213 18:17:59.988309    5650 cli_runner.go:164] Run: docker container inspect addons-377325 --format={{.State.Status}}
	I1213 18:17:59.990692    5650 addons.go:70] Setting metrics-server=true in profile "addons-377325"
	I1213 18:17:59.990725    5650 addons.go:239] Setting addon metrics-server=true in "addons-377325"
	I1213 18:17:59.990856    5650 host.go:66] Checking if "addons-377325" exists ...
	I1213 18:17:59.991492    5650 out.go:179] * Verifying Kubernetes components...
	I1213 18:17:59.992374    5650 cli_runner.go:164] Run: docker container inspect addons-377325 --format={{.State.Status}}
	I1213 18:17:59.991649    5650 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-377325"
	I1213 18:17:59.993921    5650 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-377325"
	I1213 18:17:59.993956    5650 host.go:66] Checking if "addons-377325" exists ...
	I1213 18:17:59.994423    5650 cli_runner.go:164] Run: docker container inspect addons-377325 --format={{.State.Status}}
	I1213 18:17:59.997152    5650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 18:17:59.991657    5650 addons.go:70] Setting cloud-spanner=true in profile "addons-377325"
	I1213 18:17:59.997273    5650 addons.go:239] Setting addon cloud-spanner=true in "addons-377325"
	I1213 18:17:59.997332    5650 host.go:66] Checking if "addons-377325" exists ...
	I1213 18:17:59.997778    5650 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-377325"
	I1213 18:17:59.997793    5650 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-377325"
	I1213 18:17:59.997813    5650 host.go:66] Checking if "addons-377325" exists ...
	I1213 18:17:59.998209    5650 cli_runner.go:164] Run: docker container inspect addons-377325 --format={{.State.Status}}
	I1213 18:17:59.998558    5650 cli_runner.go:164] Run: docker container inspect addons-377325 --format={{.State.Status}}
	I1213 18:17:59.991662    5650 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-377325"
	I1213 18:18:00.005116    5650 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-377325"
	I1213 18:18:00.005158    5650 host.go:66] Checking if "addons-377325" exists ...
	I1213 18:18:00.005624    5650 cli_runner.go:164] Run: docker container inspect addons-377325 --format={{.State.Status}}
	I1213 18:18:00.018135    5650 addons.go:70] Setting registry=true in profile "addons-377325"
	I1213 18:18:00.019507    5650 addons.go:239] Setting addon registry=true in "addons-377325"
	I1213 18:17:59.991665    5650 addons.go:70] Setting default-storageclass=true in profile "addons-377325"
	I1213 18:17:59.991668    5650 addons.go:70] Setting gcp-auth=true in profile "addons-377325"
	I1213 18:17:59.991671    5650 addons.go:70] Setting ingress=true in profile "addons-377325"
	I1213 18:17:59.991674    5650 addons.go:70] Setting ingress-dns=true in profile "addons-377325"
	I1213 18:18:00.020054    5650 addons.go:239] Setting addon ingress-dns=true in "addons-377325"
	I1213 18:18:00.020905    5650 host.go:66] Checking if "addons-377325" exists ...
	I1213 18:18:00.021667    5650 cli_runner.go:164] Run: docker container inspect addons-377325 --format={{.State.Status}}
	I1213 18:18:00.051891    5650 addons.go:70] Setting registry-creds=true in profile "addons-377325"
	I1213 18:18:00.072540    5650 addons.go:239] Setting addon registry-creds=true in "addons-377325"
	I1213 18:18:00.072609    5650 host.go:66] Checking if "addons-377325" exists ...
	I1213 18:18:00.073226    5650 cli_runner.go:164] Run: docker container inspect addons-377325 --format={{.State.Status}}
	I1213 18:18:00.051907    5650 addons.go:70] Setting storage-provisioner=true in profile "addons-377325"
	I1213 18:18:00.075743    5650 addons.go:239] Setting addon storage-provisioner=true in "addons-377325"
	I1213 18:18:00.075818    5650 host.go:66] Checking if "addons-377325" exists ...
	I1213 18:18:00.051911    5650 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-377325"
	I1213 18:18:00.051915    5650 addons.go:70] Setting volcano=true in profile "addons-377325"
	I1213 18:18:00.051918    5650 addons.go:70] Setting volumesnapshots=true in profile "addons-377325"
	I1213 18:18:00.052008    5650 host.go:66] Checking if "addons-377325" exists ...
	I1213 18:18:00.052025    5650 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-377325"
	I1213 18:18:00.052040    5650 mustload.go:66] Loading cluster: addons-377325
	I1213 18:18:00.052060    5650 addons.go:239] Setting addon ingress=true in "addons-377325"
	I1213 18:18:00.076233    5650 host.go:66] Checking if "addons-377325" exists ...
	I1213 18:18:00.077744    5650 cli_runner.go:164] Run: docker container inspect addons-377325 --format={{.State.Status}}
	I1213 18:18:00.082404    5650 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-377325"
	I1213 18:18:00.082922    5650 cli_runner.go:164] Run: docker container inspect addons-377325 --format={{.State.Status}}
	I1213 18:18:00.091581    5650 addons.go:239] Setting addon volcano=true in "addons-377325"
	I1213 18:18:00.091718    5650 host.go:66] Checking if "addons-377325" exists ...
	I1213 18:18:00.092336    5650 cli_runner.go:164] Run: docker container inspect addons-377325 --format={{.State.Status}}
	I1213 18:18:00.106934    5650 addons.go:239] Setting addon volumesnapshots=true in "addons-377325"
	I1213 18:18:00.107062    5650 host.go:66] Checking if "addons-377325" exists ...
	I1213 18:18:00.107748    5650 cli_runner.go:164] Run: docker container inspect addons-377325 --format={{.State.Status}}
	I1213 18:18:00.112545    5650 cli_runner.go:164] Run: docker container inspect addons-377325 --format={{.State.Status}}
	I1213 18:18:00.135574    5650 cli_runner.go:164] Run: docker container inspect addons-377325 --format={{.State.Status}}
	I1213 18:18:00.162434    5650 cli_runner.go:164] Run: docker container inspect addons-377325 --format={{.State.Status}}
	I1213 18:18:00.186737    5650 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1213 18:18:00.187371    5650 config.go:182] Loaded profile config "addons-377325": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 18:18:00.187663    5650 cli_runner.go:164] Run: docker container inspect addons-377325 --format={{.State.Status}}
	I1213 18:18:00.261312    5650 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1213 18:18:00.269368    5650 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1213 18:18:00.269454    5650 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1213 18:18:00.269569    5650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377325
	I1213 18:18:00.277175    5650 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1213 18:18:00.277200    5650 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1213 18:18:00.277276    5650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377325
	I1213 18:18:00.296171    5650 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1213 18:18:00.305688    5650 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1213 18:18:00.308649    5650 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1213 18:18:00.308681    5650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1213 18:18:00.308784    5650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377325
	I1213 18:18:00.309149    5650 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1213 18:18:00.309674    5650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1213 18:18:00.309776    5650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377325
	I1213 18:18:00.339801    5650 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1213 18:18:00.344904    5650 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1213 18:18:00.344994    5650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1213 18:18:00.345155    5650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377325
	I1213 18:18:00.379731    5650 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1213 18:18:00.405990    5650 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1213 18:18:00.409624    5650 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1213 18:18:00.409650    5650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1213 18:18:00.409722    5650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377325
	I1213 18:18:00.459757    5650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1213 18:18:00.485622    5650 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1213 18:18:00.485949    5650 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1213 18:18:00.490682    5650 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1213 18:18:00.493590    5650 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1213 18:18:00.493779    5650 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1213 18:18:00.493825    5650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1213 18:18:00.493955    5650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377325
	I1213 18:18:00.501374    5650 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 18:18:00.501648    5650 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1213 18:18:00.501677    5650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1213 18:18:00.501802    5650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377325
	I1213 18:18:00.512390    5650 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:18:00.512430    5650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 18:18:00.512496    5650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377325
	I1213 18:18:00.518948    5650 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1213 18:18:00.521188    5650 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-377325"
	I1213 18:18:00.525215    5650 host.go:66] Checking if "addons-377325" exists ...
	I1213 18:18:00.525760    5650 cli_runner.go:164] Run: docker container inspect addons-377325 --format={{.State.Status}}
	I1213 18:18:00.546828    5650 addons.go:239] Setting addon default-storageclass=true in "addons-377325"
	I1213 18:18:00.546867    5650 host.go:66] Checking if "addons-377325" exists ...
	I1213 18:18:00.547294    5650 cli_runner.go:164] Run: docker container inspect addons-377325 --format={{.State.Status}}
	I1213 18:18:00.561848    5650 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1213 18:18:00.569598    5650 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1213 18:18:00.569628    5650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1213 18:18:00.569702    5650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377325
	W1213 18:18:00.578275    5650 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1213 18:18:00.578786    5650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/addons-377325/id_rsa Username:docker}
	I1213 18:18:00.580456    5650 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1213 18:18:00.580579    5650 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1213 18:18:00.595572    5650 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1213 18:18:00.601258    5650 out.go:179]   - Using image docker.io/registry:3.0.0
	I1213 18:18:00.601469    5650 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1213 18:18:00.601483    5650 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1213 18:18:00.601557    5650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377325
	I1213 18:18:00.604558    5650 host.go:66] Checking if "addons-377325" exists ...
	I1213 18:18:00.607282    5650 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1213 18:18:00.607377    5650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1213 18:18:00.607470    5650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377325
	I1213 18:18:00.629341    5650 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1213 18:18:00.633144    5650 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1213 18:18:00.635764    5650 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1213 18:18:00.640405    5650 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 18:18:00.643680    5650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/addons-377325/id_rsa Username:docker}
	I1213 18:18:00.644657    5650 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1213 18:18:00.651220    5650 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1213 18:18:00.654219    5650 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1213 18:18:00.654284    5650 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1213 18:18:00.654387    5650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377325
	I1213 18:18:00.676747    5650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/addons-377325/id_rsa Username:docker}
	I1213 18:18:00.684638    5650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/addons-377325/id_rsa Username:docker}
	I1213 18:18:00.705200    5650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/addons-377325/id_rsa Username:docker}
	I1213 18:18:00.710806    5650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/addons-377325/id_rsa Username:docker}
	I1213 18:18:00.742374    5650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/addons-377325/id_rsa Username:docker}
	I1213 18:18:00.760177    5650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/addons-377325/id_rsa Username:docker}
	I1213 18:18:00.762398    5650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/addons-377325/id_rsa Username:docker}
	I1213 18:18:00.772851    5650 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 18:18:00.772872    5650 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 18:18:00.772932    5650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377325
	I1213 18:18:00.816531    5650 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1213 18:18:00.820754    5650 out.go:179]   - Using image docker.io/busybox:stable
	I1213 18:18:00.821106    5650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/addons-377325/id_rsa Username:docker}
	I1213 18:18:00.823426    5650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/addons-377325/id_rsa Username:docker}
	I1213 18:18:00.831204    5650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/addons-377325/id_rsa Username:docker}
	I1213 18:18:00.834842    5650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/addons-377325/id_rsa Username:docker}
	I1213 18:18:00.837323    5650 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1213 18:18:00.837342    5650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1213 18:18:00.837407    5650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377325
	W1213 18:18:00.865282    5650 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1213 18:18:00.865315    5650 retry.go:31] will retry after 236.13086ms: ssh: handshake failed: EOF
	I1213 18:18:00.867785    5650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/addons-377325/id_rsa Username:docker}
	I1213 18:18:00.886313    5650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/addons-377325/id_rsa Username:docker}
	I1213 18:18:01.394703    5650 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1213 18:18:01.394723    5650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1213 18:18:01.455726    5650 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1213 18:18:01.455746    5650 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1213 18:18:01.500240    5650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1213 18:18:01.505405    5650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1213 18:18:01.544018    5650 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1213 18:18:01.544122    5650 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1213 18:18:01.544600    5650 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1213 18:18:01.544646    5650 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1213 18:18:01.599293    5650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:18:01.644504    5650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1213 18:18:01.651493    5650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:18:01.653610    5650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1213 18:18:01.655581    5650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1213 18:18:01.657965    5650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1213 18:18:01.670574    5650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1213 18:18:01.674852    5650 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1213 18:18:01.674927    5650 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1213 18:18:01.694810    5650 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1213 18:18:01.694888    5650 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1213 18:18:01.724080    5650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1213 18:18:01.739626    5650 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 18:18:01.739696    5650 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1213 18:18:01.779412    5650 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1213 18:18:01.779482    5650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1213 18:18:01.790320    5650 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1213 18:18:01.790394    5650 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1213 18:18:01.944793    5650 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1213 18:18:01.944872    5650 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1213 18:18:01.982425    5650 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1213 18:18:01.982507    5650 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1213 18:18:01.986975    5650 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1213 18:18:01.987059    5650 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1213 18:18:02.023069    5650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 18:18:02.089439    5650 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1213 18:18:02.089517    5650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1213 18:18:02.094666    5650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1213 18:18:02.174983    5650 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1213 18:18:02.175059    5650 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1213 18:18:02.197072    5650 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1213 18:18:02.197151    5650 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1213 18:18:02.205214    5650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1213 18:18:02.361450    5650 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1213 18:18:02.361474    5650 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1213 18:18:02.405576    5650 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1213 18:18:02.405599    5650 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1213 18:18:02.437765    5650 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.977962945s)
	I1213 18:18:02.437795    5650 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
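(The sed pipeline that just completed rewrites the coredns ConfigMap so the Corefile gains a hosts stanza resolving host.minikube.internal to the gateway address; the inserted block, taken directly from the command above, is

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}

which lets pods reach services on the host machine through a stable name.)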
	I1213 18:18:02.437867    5650 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.797441192s)
	I1213 18:18:02.438604    5650 node_ready.go:35] waiting up to 6m0s for node "addons-377325" to be "Ready" ...
	I1213 18:18:02.710953    5650 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1213 18:18:02.711024    5650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1213 18:18:02.714190    5650 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1213 18:18:02.714268    5650 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1213 18:18:02.942976    5650 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-377325" context rescaled to 1 replicas
	I1213 18:18:02.944800    5650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1213 18:18:03.145551    5650 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1213 18:18:03.145622    5650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1213 18:18:03.374753    5650 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1213 18:18:03.374774    5650 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1213 18:18:03.560305    5650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.059983314s)
	I1213 18:18:03.560419    5650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (2.054952097s)
	I1213 18:18:03.560500    5650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.961128054s)
	I1213 18:18:03.560754    5650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.916178883s)
	I1213 18:18:03.573952    5650 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1213 18:18:03.574024    5650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1213 18:18:03.678806    5650 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1213 18:18:03.678831    5650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1213 18:18:03.756211    5650 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1213 18:18:03.756237    5650 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1213 18:18:03.830078    5650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1213 18:18:04.442752    5650 node_ready.go:57] node "addons-377325" has "Ready":"False" status (will retry)
	I1213 18:18:05.558374    5650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.906799654s)
	I1213 18:18:05.558481    5650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.904805315s)
	I1213 18:18:05.602501    5650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (3.946832161s)
	W1213 18:18:06.455063    5650 node_ready.go:57] node "addons-377325" has "Ready":"False" status (will retry)
	I1213 18:18:06.470501    5650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.812448043s)
	I1213 18:18:06.470535    5650 addons.go:495] Verifying addon ingress=true in "addons-377325"
	I1213 18:18:06.470690    5650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.80004404s)
	I1213 18:18:06.470868    5650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.746707737s)
	I1213 18:18:06.470949    5650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.447804056s)
	I1213 18:18:06.470962    5650 addons.go:495] Verifying addon metrics-server=true in "addons-377325"
	I1213 18:18:06.470990    5650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.37625586s)
	I1213 18:18:06.471004    5650 addons.go:495] Verifying addon registry=true in "addons-377325"
	I1213 18:18:06.471438    5650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.266145411s)
	I1213 18:18:06.471718    5650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.526841788s)
	W1213 18:18:06.471747    5650 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1213 18:18:06.471763    5650 retry.go:31] will retry after 150.352069ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
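(The failure above looks like an ordering race rather than a broken manifest: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml is submitted in the same apply batch that creates the snapshot.storage.k8s.io CRDs, and the API server has not yet begun serving the new API when the custom resource arrives, hence "ensure CRDs are installed first". The forced re-apply below, issued at 18:18:06, completes at 18:18:09 without error once the CRDs are established. Outside this harness, a typical workaround is to wait for CRD establishment before applying objects of the new kind, e.g. the illustrative command

	kubectl wait --for condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io

which is not part of this run.)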
	I1213 18:18:06.473701    5650 out.go:179] * Verifying ingress addon...
	I1213 18:18:06.475862    5650 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-377325 service yakd-dashboard -n yakd-dashboard
	
	I1213 18:18:06.475891    5650 out.go:179] * Verifying registry addon...
	I1213 18:18:06.478883    5650 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1213 18:18:06.480712    5650 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1213 18:18:06.489604    5650 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1213 18:18:06.489626    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:06.489973    5650 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1213 18:18:06.489986    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:06.622975    5650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1213 18:18:06.762390    5650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.932254781s)
	I1213 18:18:06.762425    5650 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-377325"
	I1213 18:18:06.765240    5650 out.go:179] * Verifying csi-hostpath-driver addon...
	I1213 18:18:06.769732    5650 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1213 18:18:06.776082    5650 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1213 18:18:06.776105    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
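(From here the kapi.go:96 lines are a polling loop: minikube re-queries each label selector, app.kubernetes.io/name=ingress-nginx, kubernetes.io/minikube-addons=registry, kubernetes.io/minikube-addons=csi-hostpath-driver and, shortly, the gcp-auth selector, until the matching pods leave Pending and report Ready, while the node_ready warnings keep interleaving until the node itself becomes Ready. An equivalent hand-run check, illustrative only, would be

	kubectl -n kube-system wait pod \
	  -l kubernetes.io/minikube-addons=csi-hostpath-driver \
	  --for=condition=Ready --timeout=6m0s

for each of the selectors being watched.)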
	I1213 18:18:06.982646    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:06.984917    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:07.273776    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:07.483271    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:07.483420    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:07.773093    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:07.982768    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:07.984579    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:08.235287    5650 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1213 18:18:08.235387    5650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377325
	I1213 18:18:08.254697    5650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/addons-377325/id_rsa Username:docker}
	I1213 18:18:08.273638    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:08.370130    5650 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1213 18:18:08.382559    5650 addons.go:239] Setting addon gcp-auth=true in "addons-377325"
	I1213 18:18:08.382647    5650 host.go:66] Checking if "addons-377325" exists ...
	I1213 18:18:08.383130    5650 cli_runner.go:164] Run: docker container inspect addons-377325 --format={{.State.Status}}
	I1213 18:18:08.400656    5650 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1213 18:18:08.400723    5650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377325
	I1213 18:18:08.417807    5650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/addons-377325/id_rsa Username:docker}
	I1213 18:18:08.482035    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:08.484113    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:08.772552    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 18:18:08.941667    5650 node_ready.go:57] node "addons-377325" has "Ready":"False" status (will retry)
	I1213 18:18:08.982630    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:08.983898    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:09.274388    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:09.437882    5650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.814860118s)
	I1213 18:18:09.437970    5650 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.037295976s)
	I1213 18:18:09.441454    5650 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1213 18:18:09.444456    5650 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1213 18:18:09.447348    5650 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1213 18:18:09.447387    5650 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1213 18:18:09.460125    5650 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1213 18:18:09.460148    5650 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1213 18:18:09.472713    5650 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1213 18:18:09.472735    5650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1213 18:18:09.484967    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:09.486068    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:09.488990    5650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1213 18:18:09.773739    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:09.996180    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:10.005191    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:10.022971    5650 addons.go:495] Verifying addon gcp-auth=true in "addons-377325"
	I1213 18:18:10.026492    5650 out.go:179] * Verifying gcp-auth addon...
	I1213 18:18:10.033796    5650 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1213 18:18:10.042784    5650 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1213 18:18:10.042813    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:10.273229    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:10.481846    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:10.483731    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:10.536605    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:10.773662    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:10.982581    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:10.983953    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:11.043732    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:11.273031    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 18:18:11.442028    5650 node_ready.go:57] node "addons-377325" has "Ready":"False" status (will retry)
	I1213 18:18:11.482236    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:11.484312    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:11.537120    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:11.773316    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:11.981969    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:11.984164    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:12.037002    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:12.273049    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:12.483390    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:12.483704    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:12.537246    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:12.773216    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:12.982494    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:12.984459    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:13.037283    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:13.273286    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 18:18:13.442155    5650 node_ready.go:57] node "addons-377325" has "Ready":"False" status (will retry)
	I1213 18:18:13.484173    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:13.484583    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:13.537047    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:13.774086    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:13.983657    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:13.983800    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:14.036594    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:14.273475    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:14.482065    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:14.484001    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:14.537100    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:14.772718    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:14.982536    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:14.983030    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:15.042511    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:15.272966    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:15.482513    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:15.484130    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:15.537076    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:15.773650    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 18:18:15.941182    5650 node_ready.go:57] node "addons-377325" has "Ready":"False" status (will retry)
	I1213 18:18:15.982370    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:15.984201    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:16.037357    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:16.272258    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:16.481967    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:16.484257    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:16.536968    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:16.772731    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:16.982435    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:16.983507    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:17.037250    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:17.273381    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:17.481824    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:17.483772    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:17.536576    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:17.773647    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 18:18:17.941266    5650 node_ready.go:57] node "addons-377325" has "Ready":"False" status (will retry)
	I1213 18:18:17.982067    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:17.983850    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:18.036623    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:18.272574    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:18.482399    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:18.483376    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:18.537052    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:18.773327    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:18.982586    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:18.983752    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:19.037254    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:19.273865    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:19.481793    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:19.483701    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:19.537377    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:19.774004    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 18:18:19.941687    5650 node_ready.go:57] node "addons-377325" has "Ready":"False" status (will retry)
	I1213 18:18:19.982601    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:19.983562    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:20.037981    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:20.273082    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:20.482135    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:20.484015    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:20.537206    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:20.773337    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:20.982448    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:20.983216    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:21.036930    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:21.273113    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:21.481887    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:21.484301    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:21.537304    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:21.773707    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:21.982591    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:21.983508    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:22.037608    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:22.272357    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 18:18:22.442237    5650 node_ready.go:57] node "addons-377325" has "Ready":"False" status (will retry)
	I1213 18:18:22.482488    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:22.484528    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:22.537279    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:22.772963    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:22.982028    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:22.984003    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:23.036805    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:23.272935    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:23.483332    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:23.484579    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:23.537665    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:23.774161    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:23.981811    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:23.983865    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:24.036560    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:24.273876    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:24.482213    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:24.483354    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:24.537615    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:24.772463    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 18:18:24.942037    5650 node_ready.go:57] node "addons-377325" has "Ready":"False" status (will retry)
	I1213 18:18:24.982161    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:24.984268    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:25.037070    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:25.273077    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:25.481947    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:25.484695    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:25.536589    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:25.772937    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:25.982092    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:25.983762    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:26.036491    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:26.272499    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:26.482002    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:26.483590    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:26.537403    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:26.773209    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 18:18:26.942117    5650 node_ready.go:57] node "addons-377325" has "Ready":"False" status (will retry)
	I1213 18:18:26.984498    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:26.988410    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:27.038499    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:27.272489    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:27.482051    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:27.483809    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:27.536513    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:27.772860    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:27.982739    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:27.983871    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:28.037056    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:28.273290    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:28.481800    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:28.484103    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:28.536823    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:28.772802    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:28.982379    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:28.983515    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:29.037193    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:29.273233    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 18:18:29.441771    5650 node_ready.go:57] node "addons-377325" has "Ready":"False" status (will retry)
	I1213 18:18:29.481699    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:29.483508    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:29.537316    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:29.773659    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:29.982815    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:29.983301    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:30.037753    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:30.273435    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:30.482131    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:30.484537    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:30.537419    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:30.773347    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:30.982272    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:30.984021    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:31.036887    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:31.272984    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 18:18:31.441828    5650 node_ready.go:57] node "addons-377325" has "Ready":"False" status (will retry)
	I1213 18:18:31.481645    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:31.483589    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:31.537634    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:31.772520    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:31.982545    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:31.991300    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:32.036596    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:32.272511    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:32.482027    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:32.484007    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:32.536725    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:32.772903    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:32.982978    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:32.983401    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:33.037460    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:33.273388    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 18:18:33.442425    5650 node_ready.go:57] node "addons-377325" has "Ready":"False" status (will retry)
	I1213 18:18:33.482868    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:33.484647    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:33.537574    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:33.772535    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:33.982974    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:33.984349    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:34.037594    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:34.272530    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:34.483902    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:34.484162    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:34.536749    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:34.772432    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:34.981799    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:34.983921    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:35.036591    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:35.273944    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:35.483304    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:35.483437    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:35.537063    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:35.772905    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 18:18:35.941910    5650 node_ready.go:57] node "addons-377325" has "Ready":"False" status (will retry)
	I1213 18:18:35.982415    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:35.984191    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:36.037126    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:36.272972    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:36.483301    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:36.483945    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:36.536736    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:36.772765    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:36.983922    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:36.985379    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:37.037614    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:37.272572    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:37.482720    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:37.484004    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:37.536544    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:37.773590    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:37.982101    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:37.985257    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:38.037364    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:38.273286    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 18:18:38.442335    5650 node_ready.go:57] node "addons-377325" has "Ready":"False" status (will retry)
	I1213 18:18:38.482244    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:38.484520    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:38.537240    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:38.773640    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:38.983550    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:38.986108    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:39.037153    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:39.273497    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:39.482395    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:39.484426    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:39.537334    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:39.773413    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:39.982485    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:39.984653    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:40.041846    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:40.273457    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:40.482492    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:40.483870    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:40.536712    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:40.773440    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 18:18:40.941880    5650 node_ready.go:57] node "addons-377325" has "Ready":"False" status (will retry)
	I1213 18:18:40.981992    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:40.984034    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:41.036596    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:41.272968    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:41.483300    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:41.483798    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:41.536904    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:41.779924    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:41.973434    5650 node_ready.go:49] node "addons-377325" is "Ready"
	I1213 18:18:41.973516    5650 node_ready.go:38] duration metric: took 39.534877573s for node "addons-377325" to be "Ready" ...
	I1213 18:18:41.973543    5650 api_server.go:52] waiting for apiserver process to appear ...
	I1213 18:18:41.973630    5650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:18:41.994324    5650 api_server.go:72] duration metric: took 42.007888474s to wait for apiserver process to appear ...
	I1213 18:18:41.994400    5650 api_server.go:88] waiting for apiserver healthz status ...
	I1213 18:18:41.994433    5650 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1213 18:18:42.007246    5650 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1213 18:18:42.010467    5650 api_server.go:141] control plane version: v1.34.2
	I1213 18:18:42.010501    5650 api_server.go:131] duration metric: took 16.08007ms to wait for apiserver health ...
	I1213 18:18:42.010512    5650 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 18:18:42.010916    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:42.017884    5650 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1213 18:18:42.017980    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:42.030928    5650 system_pods.go:59] 19 kube-system pods found
	I1213 18:18:42.031022    5650 system_pods.go:61] "coredns-66bc5c9577-6ct6w" [c6b2d853-3212-44d5-9a75-06889a4d9dfd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 18:18:42.031044    5650 system_pods.go:61] "csi-hostpath-attacher-0" [615f4b0a-9214-4b1d-82ce-6aa31f437ac8] Pending
	I1213 18:18:42.031066    5650 system_pods.go:61] "csi-hostpath-resizer-0" [b4d59a0b-7aee-4f55-87e5-2d3348509418] Pending
	I1213 18:18:42.031096    5650 system_pods.go:61] "csi-hostpathplugin-rlkjk" [59f61c6c-d034-49db-9bda-0afcdfb3e18b] Pending
	I1213 18:18:42.031123    5650 system_pods.go:61] "etcd-addons-377325" [d647e242-ca7d-448b-818e-6dc5efeaa694] Running
	I1213 18:18:42.031146    5650 system_pods.go:61] "kindnet-rtw78" [8e27fa8a-3f82-452a-b22c-b8a04db740b0] Running
	I1213 18:18:42.031179    5650 system_pods.go:61] "kube-apiserver-addons-377325" [5c61ee91-168f-4aad-b57d-39f41f5cb7f0] Running
	I1213 18:18:42.031199    5650 system_pods.go:61] "kube-controller-manager-addons-377325" [9d75c46d-b406-4fdc-bceb-92fec5da3b5c] Running
	I1213 18:18:42.031225    5650 system_pods.go:61] "kube-ingress-dns-minikube" [340a94e2-d09a-452b-99fd-0ac69b9d39dc] Pending
	I1213 18:18:42.031259    5650 system_pods.go:61] "kube-proxy-m8qkk" [850ee62f-39ba-438b-a2a9-88d3ac38d253] Running
	I1213 18:18:42.031281    5650 system_pods.go:61] "kube-scheduler-addons-377325" [ef2e124c-ca44-4dfc-954b-d57337637342] Running
	I1213 18:18:42.031302    5650 system_pods.go:61] "metrics-server-85b7d694d7-xj9z5" [16a6665b-52ee-4f79-9a95-e9367d750ab1] Pending
	I1213 18:18:42.031345    5650 system_pods.go:61] "nvidia-device-plugin-daemonset-qfgpv" [0270c6b1-ee5d-4441-ae6f-18e3e0423c29] Pending
	I1213 18:18:42.031364    5650 system_pods.go:61] "registry-6b586f9694-b6lxz" [e23f899f-6b28-4f63-adbd-2adb36c8f008] Pending
	I1213 18:18:42.031386    5650 system_pods.go:61] "registry-creds-764b6fb674-f9qf2" [e714a411-3862-4ffa-a880-421fa8708466] Pending
	I1213 18:18:42.031425    5650 system_pods.go:61] "registry-proxy-zxcm2" [e19a41e7-ad9e-4d36-8a5b-cc0fea51183a] Pending
	I1213 18:18:42.031458    5650 system_pods.go:61] "snapshot-controller-7d9fbc56b8-4ddpz" [87a0764e-abbb-468b-b2e5-b23a5e3eeae7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 18:18:42.031478    5650 system_pods.go:61] "snapshot-controller-7d9fbc56b8-sl9gg" [3b55e0b4-6d97-445d-9a0f-24d031fdf6a8] Pending
	I1213 18:18:42.031515    5650 system_pods.go:61] "storage-provisioner" [034f2b21-7609-45db-a977-8ec33924ac6b] Pending
	I1213 18:18:42.031536    5650 system_pods.go:74] duration metric: took 21.016721ms to wait for pod list to return data ...
	I1213 18:18:42.031559    5650 default_sa.go:34] waiting for default service account to be created ...
	I1213 18:18:42.119193    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:42.124544    5650 default_sa.go:45] found service account: "default"
	I1213 18:18:42.124626    5650 default_sa.go:55] duration metric: took 93.026726ms for default service account to be created ...
	I1213 18:18:42.124661    5650 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 18:18:42.178618    5650 system_pods.go:86] 19 kube-system pods found
	I1213 18:18:42.178673    5650 system_pods.go:89] "coredns-66bc5c9577-6ct6w" [c6b2d853-3212-44d5-9a75-06889a4d9dfd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 18:18:42.178685    5650 system_pods.go:89] "csi-hostpath-attacher-0" [615f4b0a-9214-4b1d-82ce-6aa31f437ac8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1213 18:18:42.178733    5650 system_pods.go:89] "csi-hostpath-resizer-0" [b4d59a0b-7aee-4f55-87e5-2d3348509418] Pending
	I1213 18:18:42.178746    5650 system_pods.go:89] "csi-hostpathplugin-rlkjk" [59f61c6c-d034-49db-9bda-0afcdfb3e18b] Pending
	I1213 18:18:42.178751    5650 system_pods.go:89] "etcd-addons-377325" [d647e242-ca7d-448b-818e-6dc5efeaa694] Running
	I1213 18:18:42.178756    5650 system_pods.go:89] "kindnet-rtw78" [8e27fa8a-3f82-452a-b22c-b8a04db740b0] Running
	I1213 18:18:42.178768    5650 system_pods.go:89] "kube-apiserver-addons-377325" [5c61ee91-168f-4aad-b57d-39f41f5cb7f0] Running
	I1213 18:18:42.178772    5650 system_pods.go:89] "kube-controller-manager-addons-377325" [9d75c46d-b406-4fdc-bceb-92fec5da3b5c] Running
	I1213 18:18:42.178777    5650 system_pods.go:89] "kube-ingress-dns-minikube" [340a94e2-d09a-452b-99fd-0ac69b9d39dc] Pending
	I1213 18:18:42.178796    5650 system_pods.go:89] "kube-proxy-m8qkk" [850ee62f-39ba-438b-a2a9-88d3ac38d253] Running
	I1213 18:18:42.178805    5650 system_pods.go:89] "kube-scheduler-addons-377325" [ef2e124c-ca44-4dfc-954b-d57337637342] Running
	I1213 18:18:42.178810    5650 system_pods.go:89] "metrics-server-85b7d694d7-xj9z5" [16a6665b-52ee-4f79-9a95-e9367d750ab1] Pending
	I1213 18:18:42.178825    5650 system_pods.go:89] "nvidia-device-plugin-daemonset-qfgpv" [0270c6b1-ee5d-4441-ae6f-18e3e0423c29] Pending
	I1213 18:18:42.178838    5650 system_pods.go:89] "registry-6b586f9694-b6lxz" [e23f899f-6b28-4f63-adbd-2adb36c8f008] Pending
	I1213 18:18:42.178844    5650 system_pods.go:89] "registry-creds-764b6fb674-f9qf2" [e714a411-3862-4ffa-a880-421fa8708466] Pending
	I1213 18:18:42.178855    5650 system_pods.go:89] "registry-proxy-zxcm2" [e19a41e7-ad9e-4d36-8a5b-cc0fea51183a] Pending
	I1213 18:18:42.178861    5650 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4ddpz" [87a0764e-abbb-468b-b2e5-b23a5e3eeae7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 18:18:42.178866    5650 system_pods.go:89] "snapshot-controller-7d9fbc56b8-sl9gg" [3b55e0b4-6d97-445d-9a0f-24d031fdf6a8] Pending
	I1213 18:18:42.178880    5650 system_pods.go:89] "storage-provisioner" [034f2b21-7609-45db-a977-8ec33924ac6b] Pending
	I1213 18:18:42.178907    5650 retry.go:31] will retry after 287.551206ms: missing components: kube-dns
	I1213 18:18:42.276046    5650 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1213 18:18:42.276070    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:42.481599    5650 system_pods.go:86] 19 kube-system pods found
	I1213 18:18:42.481641    5650 system_pods.go:89] "coredns-66bc5c9577-6ct6w" [c6b2d853-3212-44d5-9a75-06889a4d9dfd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 18:18:42.481690    5650 system_pods.go:89] "csi-hostpath-attacher-0" [615f4b0a-9214-4b1d-82ce-6aa31f437ac8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1213 18:18:42.481712    5650 system_pods.go:89] "csi-hostpath-resizer-0" [b4d59a0b-7aee-4f55-87e5-2d3348509418] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1213 18:18:42.481733    5650 system_pods.go:89] "csi-hostpathplugin-rlkjk" [59f61c6c-d034-49db-9bda-0afcdfb3e18b] Pending
	I1213 18:18:42.481756    5650 system_pods.go:89] "etcd-addons-377325" [d647e242-ca7d-448b-818e-6dc5efeaa694] Running
	I1213 18:18:42.481768    5650 system_pods.go:89] "kindnet-rtw78" [8e27fa8a-3f82-452a-b22c-b8a04db740b0] Running
	I1213 18:18:42.481772    5650 system_pods.go:89] "kube-apiserver-addons-377325" [5c61ee91-168f-4aad-b57d-39f41f5cb7f0] Running
	I1213 18:18:42.481777    5650 system_pods.go:89] "kube-controller-manager-addons-377325" [9d75c46d-b406-4fdc-bceb-92fec5da3b5c] Running
	I1213 18:18:42.481802    5650 system_pods.go:89] "kube-ingress-dns-minikube" [340a94e2-d09a-452b-99fd-0ac69b9d39dc] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1213 18:18:42.481808    5650 system_pods.go:89] "kube-proxy-m8qkk" [850ee62f-39ba-438b-a2a9-88d3ac38d253] Running
	I1213 18:18:42.481820    5650 system_pods.go:89] "kube-scheduler-addons-377325" [ef2e124c-ca44-4dfc-954b-d57337637342] Running
	I1213 18:18:42.481824    5650 system_pods.go:89] "metrics-server-85b7d694d7-xj9z5" [16a6665b-52ee-4f79-9a95-e9367d750ab1] Pending
	I1213 18:18:42.481832    5650 system_pods.go:89] "nvidia-device-plugin-daemonset-qfgpv" [0270c6b1-ee5d-4441-ae6f-18e3e0423c29] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1213 18:18:42.481847    5650 system_pods.go:89] "registry-6b586f9694-b6lxz" [e23f899f-6b28-4f63-adbd-2adb36c8f008] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1213 18:18:42.481852    5650 system_pods.go:89] "registry-creds-764b6fb674-f9qf2" [e714a411-3862-4ffa-a880-421fa8708466] Pending
	I1213 18:18:42.481892    5650 system_pods.go:89] "registry-proxy-zxcm2" [e19a41e7-ad9e-4d36-8a5b-cc0fea51183a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1213 18:18:42.481909    5650 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4ddpz" [87a0764e-abbb-468b-b2e5-b23a5e3eeae7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 18:18:42.481918    5650 system_pods.go:89] "snapshot-controller-7d9fbc56b8-sl9gg" [3b55e0b4-6d97-445d-9a0f-24d031fdf6a8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 18:18:42.481928    5650 system_pods.go:89] "storage-provisioner" [034f2b21-7609-45db-a977-8ec33924ac6b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 18:18:42.481943    5650 retry.go:31] will retry after 385.776543ms: missing components: kube-dns
	I1213 18:18:42.487148    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:42.488119    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:42.542577    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:42.774956    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:42.886745    5650 system_pods.go:86] 19 kube-system pods found
	I1213 18:18:42.886786    5650 system_pods.go:89] "coredns-66bc5c9577-6ct6w" [c6b2d853-3212-44d5-9a75-06889a4d9dfd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 18:18:42.886823    5650 system_pods.go:89] "csi-hostpath-attacher-0" [615f4b0a-9214-4b1d-82ce-6aa31f437ac8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1213 18:18:42.886840    5650 system_pods.go:89] "csi-hostpath-resizer-0" [b4d59a0b-7aee-4f55-87e5-2d3348509418] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1213 18:18:42.886849    5650 system_pods.go:89] "csi-hostpathplugin-rlkjk" [59f61c6c-d034-49db-9bda-0afcdfb3e18b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1213 18:18:42.886860    5650 system_pods.go:89] "etcd-addons-377325" [d647e242-ca7d-448b-818e-6dc5efeaa694] Running
	I1213 18:18:42.886866    5650 system_pods.go:89] "kindnet-rtw78" [8e27fa8a-3f82-452a-b22c-b8a04db740b0] Running
	I1213 18:18:42.886872    5650 system_pods.go:89] "kube-apiserver-addons-377325" [5c61ee91-168f-4aad-b57d-39f41f5cb7f0] Running
	I1213 18:18:42.886894    5650 system_pods.go:89] "kube-controller-manager-addons-377325" [9d75c46d-b406-4fdc-bceb-92fec5da3b5c] Running
	I1213 18:18:42.886914    5650 system_pods.go:89] "kube-ingress-dns-minikube" [340a94e2-d09a-452b-99fd-0ac69b9d39dc] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1213 18:18:42.886925    5650 system_pods.go:89] "kube-proxy-m8qkk" [850ee62f-39ba-438b-a2a9-88d3ac38d253] Running
	I1213 18:18:42.886931    5650 system_pods.go:89] "kube-scheduler-addons-377325" [ef2e124c-ca44-4dfc-954b-d57337637342] Running
	I1213 18:18:42.886937    5650 system_pods.go:89] "metrics-server-85b7d694d7-xj9z5" [16a6665b-52ee-4f79-9a95-e9367d750ab1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 18:18:42.886949    5650 system_pods.go:89] "nvidia-device-plugin-daemonset-qfgpv" [0270c6b1-ee5d-4441-ae6f-18e3e0423c29] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1213 18:18:42.886955    5650 system_pods.go:89] "registry-6b586f9694-b6lxz" [e23f899f-6b28-4f63-adbd-2adb36c8f008] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1213 18:18:42.886961    5650 system_pods.go:89] "registry-creds-764b6fb674-f9qf2" [e714a411-3862-4ffa-a880-421fa8708466] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1213 18:18:42.886972    5650 system_pods.go:89] "registry-proxy-zxcm2" [e19a41e7-ad9e-4d36-8a5b-cc0fea51183a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1213 18:18:42.886993    5650 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4ddpz" [87a0764e-abbb-468b-b2e5-b23a5e3eeae7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 18:18:42.887010    5650 system_pods.go:89] "snapshot-controller-7d9fbc56b8-sl9gg" [3b55e0b4-6d97-445d-9a0f-24d031fdf6a8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 18:18:42.887017    5650 system_pods.go:89] "storage-provisioner" [034f2b21-7609-45db-a977-8ec33924ac6b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 18:18:42.887037    5650 retry.go:31] will retry after 471.336241ms: missing components: kube-dns
	I1213 18:18:42.993256    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:43.001274    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:43.065337    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:43.274305    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:43.362631    5650 system_pods.go:86] 19 kube-system pods found
	I1213 18:18:43.362668    5650 system_pods.go:89] "coredns-66bc5c9577-6ct6w" [c6b2d853-3212-44d5-9a75-06889a4d9dfd] Running
	I1213 18:18:43.362679    5650 system_pods.go:89] "csi-hostpath-attacher-0" [615f4b0a-9214-4b1d-82ce-6aa31f437ac8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1213 18:18:43.362722    5650 system_pods.go:89] "csi-hostpath-resizer-0" [b4d59a0b-7aee-4f55-87e5-2d3348509418] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1213 18:18:43.362740    5650 system_pods.go:89] "csi-hostpathplugin-rlkjk" [59f61c6c-d034-49db-9bda-0afcdfb3e18b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1213 18:18:43.362745    5650 system_pods.go:89] "etcd-addons-377325" [d647e242-ca7d-448b-818e-6dc5efeaa694] Running
	I1213 18:18:43.362750    5650 system_pods.go:89] "kindnet-rtw78" [8e27fa8a-3f82-452a-b22c-b8a04db740b0] Running
	I1213 18:18:43.362755    5650 system_pods.go:89] "kube-apiserver-addons-377325" [5c61ee91-168f-4aad-b57d-39f41f5cb7f0] Running
	I1213 18:18:43.362768    5650 system_pods.go:89] "kube-controller-manager-addons-377325" [9d75c46d-b406-4fdc-bceb-92fec5da3b5c] Running
	I1213 18:18:43.362790    5650 system_pods.go:89] "kube-ingress-dns-minikube" [340a94e2-d09a-452b-99fd-0ac69b9d39dc] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1213 18:18:43.362801    5650 system_pods.go:89] "kube-proxy-m8qkk" [850ee62f-39ba-438b-a2a9-88d3ac38d253] Running
	I1213 18:18:43.362822    5650 system_pods.go:89] "kube-scheduler-addons-377325" [ef2e124c-ca44-4dfc-954b-d57337637342] Running
	I1213 18:18:43.362836    5650 system_pods.go:89] "metrics-server-85b7d694d7-xj9z5" [16a6665b-52ee-4f79-9a95-e9367d750ab1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 18:18:43.362842    5650 system_pods.go:89] "nvidia-device-plugin-daemonset-qfgpv" [0270c6b1-ee5d-4441-ae6f-18e3e0423c29] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1213 18:18:43.362851    5650 system_pods.go:89] "registry-6b586f9694-b6lxz" [e23f899f-6b28-4f63-adbd-2adb36c8f008] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1213 18:18:43.362862    5650 system_pods.go:89] "registry-creds-764b6fb674-f9qf2" [e714a411-3862-4ffa-a880-421fa8708466] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1213 18:18:43.362871    5650 system_pods.go:89] "registry-proxy-zxcm2" [e19a41e7-ad9e-4d36-8a5b-cc0fea51183a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1213 18:18:43.362878    5650 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4ddpz" [87a0764e-abbb-468b-b2e5-b23a5e3eeae7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 18:18:43.362914    5650 system_pods.go:89] "snapshot-controller-7d9fbc56b8-sl9gg" [3b55e0b4-6d97-445d-9a0f-24d031fdf6a8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 18:18:43.362928    5650 system_pods.go:89] "storage-provisioner" [034f2b21-7609-45db-a977-8ec33924ac6b] Running
	I1213 18:18:43.362941    5650 system_pods.go:126] duration metric: took 1.238259826s to wait for k8s-apps to be running ...
	I1213 18:18:43.362955    5650 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 18:18:43.363025    5650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 18:18:43.380862    5650 system_svc.go:56] duration metric: took 17.885786ms WaitForService to wait for kubelet
	I1213 18:18:43.380939    5650 kubeadm.go:587] duration metric: took 43.394507038s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 18:18:43.380972    5650 node_conditions.go:102] verifying NodePressure condition ...
	I1213 18:18:43.384344    5650 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1213 18:18:43.384424    5650 node_conditions.go:123] node cpu capacity is 2
	I1213 18:18:43.384452    5650 node_conditions.go:105] duration metric: took 3.460941ms to run NodePressure ...
	I1213 18:18:43.384479    5650 start.go:242] waiting for startup goroutines ...
	I1213 18:18:43.485469    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:43.485972    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:43.585651    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:43.774862    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:43.982846    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:43.985449    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:44.042933    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:44.273313    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:44.483739    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:44.483941    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:44.537088    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:44.773459    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:44.983783    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:44.984548    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:45.040364    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:45.291213    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:45.482849    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:45.489360    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:45.537153    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:45.774153    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:45.983646    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:45.984117    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:46.037045    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:46.273734    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:46.484083    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:46.484726    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:46.538004    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:46.774383    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:46.983698    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:46.985776    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:47.037136    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:47.273690    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:47.484456    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:47.484733    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:47.536675    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:47.772995    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:47.982429    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:47.984727    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:48.037663    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:48.273321    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:48.482664    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:48.484341    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:48.538247    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:48.773788    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:48.981804    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:48.984179    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:49.037957    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:49.273601    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:49.483655    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:49.484258    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:49.537235    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:49.774066    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:49.982052    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:49.983921    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:50.038802    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:50.274556    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:50.488203    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:50.488329    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:50.537073    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:50.773512    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:50.982243    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:50.984450    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:51.037646    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:51.272845    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:51.483456    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:51.486557    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:51.538465    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:51.774232    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:51.982135    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:51.984533    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:52.037926    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:52.273782    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:52.482840    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:52.485133    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:52.537516    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:52.773281    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:52.982329    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:52.984414    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:53.037763    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:53.272923    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:53.485219    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:53.485588    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:53.537296    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:53.774838    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:53.985224    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:53.985711    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:54.037686    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:54.274236    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:54.482767    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:54.485774    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:54.536972    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:54.774019    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:54.985647    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:54.985785    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:55.043159    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:55.274382    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:55.483938    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:55.484944    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:55.537075    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:55.773548    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:55.984114    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:55.984284    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:56.037381    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:56.274144    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:56.484401    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:56.484571    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:56.537734    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:56.774290    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:56.984231    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:56.984405    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:57.037286    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:57.273431    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:57.482816    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:57.486840    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:57.537173    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:57.774789    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:57.983456    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:57.984784    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:58.036581    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:58.272792    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:58.483274    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:58.484786    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:58.537718    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:58.778219    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:58.986905    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:58.987189    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:59.086575    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:59.282453    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:59.485562    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:59.485909    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:18:59.537433    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:18:59.774764    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:18:59.984947    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:18:59.985989    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:00.040715    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:00.319929    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:00.482996    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:00.485542    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:00.537971    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:00.783025    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:00.985238    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:00.985350    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:01.037242    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:01.275106    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:01.485703    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:01.486064    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:01.537328    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:01.779420    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:01.985972    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:01.988689    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:02.039439    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:02.274626    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:02.483580    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:02.485042    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:02.536957    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:02.773575    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:02.982954    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:02.984368    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:03.037360    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:03.274306    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:03.483019    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:03.486487    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:03.537832    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:03.774518    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:03.983189    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:03.984205    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:04.037052    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:04.273938    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:04.483386    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:04.485678    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:04.537915    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:04.773647    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:04.983958    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:04.984298    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:05.037566    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:05.274765    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:05.485218    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:05.486039    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:05.536993    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:05.775477    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:05.985454    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:05.985619    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:06.037679    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:06.273210    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:06.483721    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:06.485259    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:06.537599    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:06.774290    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:06.985066    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:06.985396    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:07.037432    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:07.274203    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:07.483857    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:07.485096    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:07.537180    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:07.773626    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:07.984399    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:07.984533    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:08.037707    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:08.273161    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:08.482942    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:08.485787    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:08.537907    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:08.773721    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:08.983202    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:08.984250    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:09.037050    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:09.273821    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:09.482483    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:09.484299    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:09.537547    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:09.774080    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:09.984214    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:09.984672    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:10.037214    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:10.273398    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:10.483839    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:10.484819    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:10.537199    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:10.773932    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:10.982913    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:10.984800    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:11.037944    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:11.273992    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:11.482744    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:11.484437    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:11.537643    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:11.773223    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:11.983816    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:11.984414    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:12.037292    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:12.273570    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:12.484915    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:12.485162    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:12.539974    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:12.773829    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:12.982423    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:12.984676    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:13.037932    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:13.274131    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:13.483747    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:13.485331    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:13.537404    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:13.800454    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:13.982801    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:13.985350    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:14.037293    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:14.273787    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:14.484354    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:14.485533    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:14.538006    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:14.774013    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:14.984795    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:14.985180    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:15.044225    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:15.273657    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:15.490116    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:15.490862    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:15.536686    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:15.777796    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:15.983753    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:15.985384    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:16.086251    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:16.273920    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:16.483728    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:16.485337    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:16.537475    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:16.774842    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:16.983679    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:16.985561    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:17.037560    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:17.275477    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:17.486171    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:17.486388    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:17.537452    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:17.774367    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:17.983379    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:17.986410    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:18.037734    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:18.273758    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:18.482816    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:18.484190    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:18.537661    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:18.773347    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:18.986279    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:18.986562    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:19.047464    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:19.277274    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:19.490891    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:19.491155    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:19.538614    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:19.775934    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:19.986356    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:19.986632    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:20.039986    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:20.274173    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:20.511458    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:20.511910    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:20.543455    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:20.774922    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:20.983107    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:20.985649    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:21.040193    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:21.274070    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:21.482963    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:21.484936    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:21.537357    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:21.773524    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:21.984671    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:21.987581    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 18:19:22.084871    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:22.272957    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:22.483594    5650 kapi.go:107] duration metric: took 1m16.00288394s to wait for kubernetes.io/minikube-addons=registry ...
	I1213 18:19:22.483845    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:22.537453    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:22.773772    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:22.982842    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:23.040695    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:23.273505    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:23.483071    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:23.537625    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:23.774063    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:23.982954    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:24.038265    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:24.274086    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:24.482156    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:24.536739    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:24.773740    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:24.981874    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:25.037325    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 18:19:25.274015    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:25.482582    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:25.537168    5650 kapi.go:107] duration metric: took 1m15.503372971s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1213 18:19:25.540551    5650 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-377325 cluster.
	I1213 18:19:25.543278    5650 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1213 18:19:25.546075    5650 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1213 18:19:25.773884    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:25.982315    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:26.273826    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:26.482069    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:26.773380    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:26.982824    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:27.273420    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:27.482772    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:27.772829    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:27.981641    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:28.273451    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:28.482886    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:28.772985    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:28.981854    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:29.273481    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:29.482854    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:29.773170    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:29.982449    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:30.272879    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:30.482852    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:30.773718    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:30.982867    5650 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 18:19:31.273406    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:31.482734    5650 kapi.go:107] duration metric: took 1m25.003850297s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1213 18:19:31.773977    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:32.273696    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:32.775115    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:33.273880    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:33.773901    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:34.274855    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:34.773622    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:35.272679    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:35.774270    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:36.274190    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:36.775663    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:37.292300    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:37.774009    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:38.273668    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:38.774034    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:39.274789    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:39.774419    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:40.273527    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:40.807898    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:41.273774    5650 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 18:19:41.774065    5650 kapi.go:107] duration metric: took 1m35.004328554s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1213 18:19:41.777393    5650 out.go:179] * Enabled addons: cloud-spanner, registry-creds, amd-gpu-device-plugin, default-storageclass, storage-provisioner, nvidia-device-plugin, inspektor-gadget, ingress-dns, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1213 18:19:41.780333    5650 addons.go:530] duration metric: took 1m41.793448769s for enable addons: enabled=[cloud-spanner registry-creds amd-gpu-device-plugin default-storageclass storage-provisioner nvidia-device-plugin inspektor-gadget ingress-dns metrics-server yakd storage-provisioner-rancher volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1213 18:19:41.780401    5650 start.go:247] waiting for cluster config update ...
	I1213 18:19:41.780427    5650 start.go:256] writing updated cluster config ...
	I1213 18:19:41.780747    5650 ssh_runner.go:195] Run: rm -f paused
	I1213 18:19:41.785511    5650 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 18:19:41.788962    5650 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6ct6w" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 18:19:41.795078    5650 pod_ready.go:94] pod "coredns-66bc5c9577-6ct6w" is "Ready"
	I1213 18:19:41.795105    5650 pod_ready.go:86] duration metric: took 6.115145ms for pod "coredns-66bc5c9577-6ct6w" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 18:19:41.797594    5650 pod_ready.go:83] waiting for pod "etcd-addons-377325" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 18:19:41.802306    5650 pod_ready.go:94] pod "etcd-addons-377325" is "Ready"
	I1213 18:19:41.802332    5650 pod_ready.go:86] duration metric: took 4.710369ms for pod "etcd-addons-377325" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 18:19:41.804949    5650 pod_ready.go:83] waiting for pod "kube-apiserver-addons-377325" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 18:19:41.809733    5650 pod_ready.go:94] pod "kube-apiserver-addons-377325" is "Ready"
	I1213 18:19:41.809762    5650 pod_ready.go:86] duration metric: took 4.786735ms for pod "kube-apiserver-addons-377325" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 18:19:41.812359    5650 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-377325" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 18:19:42.190858    5650 pod_ready.go:94] pod "kube-controller-manager-addons-377325" is "Ready"
	I1213 18:19:42.190893    5650 pod_ready.go:86] duration metric: took 378.506551ms for pod "kube-controller-manager-addons-377325" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 18:19:42.390401    5650 pod_ready.go:83] waiting for pod "kube-proxy-m8qkk" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 18:19:42.789154    5650 pod_ready.go:94] pod "kube-proxy-m8qkk" is "Ready"
	I1213 18:19:42.789224    5650 pod_ready.go:86] duration metric: took 398.795001ms for pod "kube-proxy-m8qkk" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 18:19:42.989391    5650 pod_ready.go:83] waiting for pod "kube-scheduler-addons-377325" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 18:19:43.389684    5650 pod_ready.go:94] pod "kube-scheduler-addons-377325" is "Ready"
	I1213 18:19:43.389718    5650 pod_ready.go:86] duration metric: took 400.257469ms for pod "kube-scheduler-addons-377325" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 18:19:43.389731    5650 pod_ready.go:40] duration metric: took 1.604186065s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 18:19:43.783034    5650 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1213 18:19:43.794233    5650 out.go:179] * Done! kubectl is now configured to use "addons-377325" cluster and "default" namespace by default
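	The gcp-auth messages above point to the `gcp-auth-skip-secret` pod label as the opt-out for credential mounting, and the repeated kapi.go:96 lines show minikube polling pods by label selector until they leave Pending. The following is a minimal client-go sketch of both ideas, assuming a kubeconfig for the addons-377325 cluster; the pod name is hypothetical and this is an illustration, not minikube's own code.

	// Sketch only (not minikube source): creates a pod that opts out of the
	// gcp-auth webhook via the gcp-auth-skip-secret label, then polls it by
	// label until it reports Running, mirroring the "waiting for pod" lines
	// in the log above. Assumes client-go and $HOME/.kube/config.
	package main

	import (
		"context"
		"fmt"
		"os"
		"path/filepath"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Label mentioned in the gcp-auth output: pods carrying it are not
		// mutated to mount the GCP credentials.
		pod := &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name:   "busybox-no-gcp-auth", // hypothetical name
				Labels: map[string]string{"gcp-auth-skip-secret": "true"},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{
					Name:    "busybox",
					Image:   "gcr.io/k8s-minikube/busybox:1.28.4-glibc",
					Command: []string{"sleep", "3600"},
				}},
			},
		}
		if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
			panic(err)
		}

		// Poll pods matching the label until one is Running (fixed interval,
		// no timeout handling, for brevity).
		for {
			pods, err := client.CoreV1().Pods("default").List(context.TODO(),
				metav1.ListOptions{LabelSelector: "gcp-auth-skip-secret=true"})
			if err != nil {
				panic(err)
			}
			if len(pods.Items) > 0 && pods.Items[0].Status.Phase == corev1.PodRunning {
				fmt.Println("pod is Running")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	Pods created without that label receive the credential mount described in the gcp-auth output; per the same output, existing pods pick up credentials only after being recreated or after rerunning addons enable with --refresh.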
	
	
	==> CRI-O <==
	Dec 13 18:19:45 addons-377325 crio[833]: time="2025-12-13T18:19:45.210337887Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 13 18:19:47 addons-377325 crio[833]: time="2025-12-13T18:19:47.369596853Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=5893ceb7-f9f5-467a-8449-19081b31aec9 name=/runtime.v1.ImageService/PullImage
	Dec 13 18:19:47 addons-377325 crio[833]: time="2025-12-13T18:19:47.370492134Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0430710b-8bd1-41ea-a6f6-a0b645badcfb name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:19:47 addons-377325 crio[833]: time="2025-12-13T18:19:47.372164139Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4b6a2a82-0efe-40c9-b3d1-1083f4a99487 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:19:47 addons-377325 crio[833]: time="2025-12-13T18:19:47.377812874Z" level=info msg="Creating container: default/busybox/busybox" id=8296c970-4821-4b70-9454-05bf3528fbb3 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 18:19:47 addons-377325 crio[833]: time="2025-12-13T18:19:47.377941015Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 18:19:47 addons-377325 crio[833]: time="2025-12-13T18:19:47.384712061Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 18:19:47 addons-377325 crio[833]: time="2025-12-13T18:19:47.385439029Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 18:19:47 addons-377325 crio[833]: time="2025-12-13T18:19:47.403774757Z" level=info msg="Created container 9ef1f75ce2915dcfc931c17913c7a8997a91f8787218846c0a3a8b1857866413: default/busybox/busybox" id=8296c970-4821-4b70-9454-05bf3528fbb3 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 18:19:47 addons-377325 crio[833]: time="2025-12-13T18:19:47.405204264Z" level=info msg="Starting container: 9ef1f75ce2915dcfc931c17913c7a8997a91f8787218846c0a3a8b1857866413" id=0945833a-6c16-4c19-b233-6267e27e6d1e name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 18:19:47 addons-377325 crio[833]: time="2025-12-13T18:19:47.407060057Z" level=info msg="Started container" PID=4871 containerID=9ef1f75ce2915dcfc931c17913c7a8997a91f8787218846c0a3a8b1857866413 description=default/busybox/busybox id=0945833a-6c16-4c19-b233-6267e27e6d1e name=/runtime.v1.RuntimeService/StartContainer sandboxID=1c153b6570c0d0ba3ae2c2f42bd01a9d33f90e038b2377f2bd88fdabf8ba0c6a
	Dec 13 18:19:54 addons-377325 crio[833]: time="2025-12-13T18:19:54.765568655Z" level=info msg="Removing container: 2941325e95f176cea74c2a80dfb45e004b8e33250946fed57e9e7920e11773f4" id=d01229fc-02f2-440f-81f5-967642ffd6be name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 18:19:54 addons-377325 crio[833]: time="2025-12-13T18:19:54.774500861Z" level=info msg="Error loading conmon cgroup of container 2941325e95f176cea74c2a80dfb45e004b8e33250946fed57e9e7920e11773f4: cgroup deleted" id=d01229fc-02f2-440f-81f5-967642ffd6be name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 18:19:54 addons-377325 crio[833]: time="2025-12-13T18:19:54.779873261Z" level=info msg="Removed container 2941325e95f176cea74c2a80dfb45e004b8e33250946fed57e9e7920e11773f4: gcp-auth/gcp-auth-certs-patch-rclbj/patch" id=d01229fc-02f2-440f-81f5-967642ffd6be name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 18:19:54 addons-377325 crio[833]: time="2025-12-13T18:19:54.782669456Z" level=info msg="Removing container: 007d0527b2841baf1cf1ae1a5cd308367ddde8ff8bd9574a79d1f209bd538e19" id=f2f5ed66-6691-4e6d-b21e-4273c84e087e name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 18:19:54 addons-377325 crio[833]: time="2025-12-13T18:19:54.785184236Z" level=info msg="Error loading conmon cgroup of container 007d0527b2841baf1cf1ae1a5cd308367ddde8ff8bd9574a79d1f209bd538e19: cgroup deleted" id=f2f5ed66-6691-4e6d-b21e-4273c84e087e name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 18:19:54 addons-377325 crio[833]: time="2025-12-13T18:19:54.790114554Z" level=info msg="Removed container 007d0527b2841baf1cf1ae1a5cd308367ddde8ff8bd9574a79d1f209bd538e19: gcp-auth/gcp-auth-certs-create-nmt5h/create" id=f2f5ed66-6691-4e6d-b21e-4273c84e087e name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 18:19:54 addons-377325 crio[833]: time="2025-12-13T18:19:54.794088347Z" level=info msg="Stopping pod sandbox: eccdd53df58b3a01f11ebf73f3064020446f68a30b099684bc8ceea364ae93c4" id=c57fb667-2af1-47ef-a4f8-939f162a3a58 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 13 18:19:54 addons-377325 crio[833]: time="2025-12-13T18:19:54.794279783Z" level=info msg="Stopped pod sandbox (already stopped): eccdd53df58b3a01f11ebf73f3064020446f68a30b099684bc8ceea364ae93c4" id=c57fb667-2af1-47ef-a4f8-939f162a3a58 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 13 18:19:54 addons-377325 crio[833]: time="2025-12-13T18:19:54.794765016Z" level=info msg="Removing pod sandbox: eccdd53df58b3a01f11ebf73f3064020446f68a30b099684bc8ceea364ae93c4" id=2f18b0cc-cdb0-4f9c-95a4-d81af06981f8 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 13 18:19:54 addons-377325 crio[833]: time="2025-12-13T18:19:54.801854229Z" level=info msg="Removed pod sandbox: eccdd53df58b3a01f11ebf73f3064020446f68a30b099684bc8ceea364ae93c4" id=2f18b0cc-cdb0-4f9c-95a4-d81af06981f8 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 13 18:19:54 addons-377325 crio[833]: time="2025-12-13T18:19:54.802755885Z" level=info msg="Stopping pod sandbox: f2f55f3d1824337f3d5a84408aefd63f548757cf3dd4b0444c985d7234595943" id=a9631ddc-a5e2-4dc1-b753-55ff6432fd6a name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 13 18:19:54 addons-377325 crio[833]: time="2025-12-13T18:19:54.802918775Z" level=info msg="Stopped pod sandbox (already stopped): f2f55f3d1824337f3d5a84408aefd63f548757cf3dd4b0444c985d7234595943" id=a9631ddc-a5e2-4dc1-b753-55ff6432fd6a name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 13 18:19:54 addons-377325 crio[833]: time="2025-12-13T18:19:54.803599965Z" level=info msg="Removing pod sandbox: f2f55f3d1824337f3d5a84408aefd63f548757cf3dd4b0444c985d7234595943" id=10e84083-974b-4605-a373-0fea193b269b name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 13 18:19:54 addons-377325 crio[833]: time="2025-12-13T18:19:54.810047045Z" level=info msg="Removed pod sandbox: f2f55f3d1824337f3d5a84408aefd63f548757cf3dd4b0444c985d7234595943" id=10e84083-974b-4605-a373-0fea193b269b name=/runtime.v1.RuntimeService/RemovePodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	9ef1f75ce2915       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          8 seconds ago        Running             busybox                                  0                   1c153b6570c0d       busybox                                     default
	42d706a88ed1b       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          15 seconds ago       Running             csi-snapshotter                          0                   4432841cc128d       csi-hostpathplugin-rlkjk                    kube-system
	1c4f8a1dece34       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          16 seconds ago       Running             csi-provisioner                          0                   4432841cc128d       csi-hostpathplugin-rlkjk                    kube-system
	f361bc25cf32b       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            18 seconds ago       Running             liveness-probe                           0                   4432841cc128d       csi-hostpathplugin-rlkjk                    kube-system
	352bbc3896f30       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           19 seconds ago       Running             hostpath                                 0                   4432841cc128d       csi-hostpathplugin-rlkjk                    kube-system
	6e046f2674c0c       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:fadc7bf59b69965b6707edb68022bed4f55a1f99b15f7acd272793e48f171496                            20 seconds ago       Running             gadget                                   0                   eb3aa5538db7e       gadget-btw98                                gadget
	2e7fb6d0ca7ac       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                23 seconds ago       Running             node-driver-registrar                    0                   4432841cc128d       csi-hostpathplugin-rlkjk                    kube-system
	8554f7b9834d4       registry.k8s.io/ingress-nginx/controller@sha256:75494e2145fbebf362d24e24e9285b7fbb7da8783ab272092e3126e24ee4776d                             25 seconds ago       Running             controller                               0                   a51c674e698f8       ingress-nginx-controller-85d4c799dd-422pz   ingress-nginx
	18b8bafbb6153       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 31 seconds ago       Running             gcp-auth                                 0                   b7540fb4e90f8       gcp-auth-78565c9fb4-t8vr8                   gcp-auth
	cd33fc9243f51       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              34 seconds ago       Running             registry-proxy                           0                   bc05f87b0190e       registry-proxy-zxcm2                        kube-system
	3946c9e84e3da       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        38 seconds ago       Running             metrics-server                           0                   d3ab1879b99df       metrics-server-85b7d694d7-xj9z5             kube-system
	98b036da9d856       gcr.io/cloud-spanner-emulator/emulator@sha256:daeab9cb1978e02113045625e2633619f465f22aac7638101995f4cd03607170                               41 seconds ago       Running             cloud-spanner-emulator                   0                   06b83f1edd870       cloud-spanner-emulator-5bdddb765-vxvnq      default
	054c83d5a1f87       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              45 seconds ago       Running             csi-resizer                              0                   48b2dba852e46       csi-hostpath-resizer-0                      kube-system
	87610c2eb50cf       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     46 seconds ago       Running             nvidia-device-plugin-ctr                 0                   da9fb2130cdef       nvidia-device-plugin-daemonset-qfgpv        kube-system
	52764c4f81789       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           50 seconds ago       Running             registry                                 0                   73e04023318a9       registry-6b586f9694-b6lxz                   kube-system
	0a800ad4dd0e9       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      52 seconds ago       Running             volume-snapshot-controller               0                   77ee3184ef830       snapshot-controller-7d9fbc56b8-4ddpz        kube-system
	eb2db55011acb       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             52 seconds ago       Running             local-path-provisioner                   0                   344cfe54e76ab       local-path-provisioner-648f6765c9-pgkvr     local-path-storage
	7dddc3bceec5a       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   53 seconds ago       Running             csi-external-health-monitor-controller   0                   4432841cc128d       csi-hostpathplugin-rlkjk                    kube-system
	599b8ce504818       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      55 seconds ago       Running             volume-snapshot-controller               0                   d381af8b77d0d       snapshot-controller-7d9fbc56b8-sl9gg        kube-system
	5abc99c42c2ae       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:c9c1ef89e4bb9d6c9c6c0b5375c3253a0b951e5b731240be20cebe5593de142d                   57 seconds ago       Exited              patch                                    0                   8e26ec8a10e0b       ingress-nginx-admission-patch-sqd6d         ingress-nginx
	228e6f9a0fded       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               57 seconds ago       Running             minikube-ingress-dns                     0                   3c4e107707b77       kube-ingress-dns-minikube                   kube-system
	388dcec00ffb8       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:c9c1ef89e4bb9d6c9c6c0b5375c3253a0b951e5b731240be20cebe5593de142d                   About a minute ago   Exited              create                                   0                   b7e6f5e285471       ingress-nginx-admission-create-bhb5h        ingress-nginx
	0d77a566cb2c6       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             About a minute ago   Running             csi-attacher                             0                   8f085c673c00a       csi-hostpath-attacher-0                     kube-system
	4ff2b97ce30ed       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              About a minute ago   Running             yakd                                     0                   988c876e76055       yakd-dashboard-5ff678cb9-4g4kw              yakd-dashboard
	dae0269172396       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   cadf8295cc7d2       storage-provisioner                         kube-system
	c37b9bf999a3f       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             About a minute ago   Running             coredns                                  0                   90c831d769b08       coredns-66bc5c9577-6ct6w                    kube-system
	57a4c5bd3b052       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                                                             About a minute ago   Running             kube-proxy                               0                   4fd28f9ac87c8       kube-proxy-m8qkk                            kube-system
	05178b358a31f       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             About a minute ago   Running             kindnet-cni                              0                   8cb043565585f       kindnet-rtw78                               kube-system
	4c0b427c73b3b       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                                                             2 minutes ago        Running             kube-scheduler                           0                   d7265bd2288d4       kube-scheduler-addons-377325                kube-system
	003f9ee38f6b4       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                                                             2 minutes ago        Running             kube-controller-manager                  0                   e49a21d77735d       kube-controller-manager-addons-377325       kube-system
	9f44e406e70a4       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                                                             2 minutes ago        Running             etcd                                     0                   176c3b729eff9       etcd-addons-377325                          kube-system
	3edde11a7e903       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7                                                                             2 minutes ago        Running             kube-apiserver                           0                   d54df531fe9b8       kube-apiserver-addons-377325                kube-system
	
	
	==> coredns [c37b9bf999a3f7ee5efa91a30230aedd4764b122566edbc45a747e71e6f77aee] <==
	[INFO] 10.244.0.14:36238 - 35633 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000144018s
	[INFO] 10.244.0.14:36238 - 47919 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002436264s
	[INFO] 10.244.0.14:36238 - 62493 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002835186s
	[INFO] 10.244.0.14:36238 - 58385 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000130545s
	[INFO] 10.244.0.14:36238 - 4890 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000177988s
	[INFO] 10.244.0.14:48046 - 27896 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000159682s
	[INFO] 10.244.0.14:48046 - 27427 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000137938s
	[INFO] 10.244.0.14:37118 - 18009 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000099628s
	[INFO] 10.244.0.14:37118 - 17788 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000187563s
	[INFO] 10.244.0.14:54220 - 46876 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000110812s
	[INFO] 10.244.0.14:54220 - 46647 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000200273s
	[INFO] 10.244.0.14:47250 - 36227 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001606132s
	[INFO] 10.244.0.14:47250 - 36022 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001543954s
	[INFO] 10.244.0.14:45882 - 37628 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00013697s
	[INFO] 10.244.0.14:45882 - 37208 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000075291s
	[INFO] 10.244.0.20:55855 - 8897 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000165303s
	[INFO] 10.244.0.20:33910 - 20049 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000080108s
	[INFO] 10.244.0.20:52805 - 31624 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000096468s
	[INFO] 10.244.0.20:33823 - 9905 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000073371s
	[INFO] 10.244.0.20:40579 - 48271 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000096559s
	[INFO] 10.244.0.20:46652 - 56081 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000071s
	[INFO] 10.244.0.20:38864 - 53583 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002023918s
	[INFO] 10.244.0.20:59060 - 22171 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001739411s
	[INFO] 10.244.0.20:35794 - 42123 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.009097607s
	[INFO] 10.244.0.20:42467 - 30138 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.008140943s
	
	
	==> describe nodes <==
	Name:               addons-377325
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-377325
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=142a8bd7cb3f031b5f72a3965bb211dc77d9e1a7
	                    minikube.k8s.io/name=addons-377325
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T18_17_55_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-377325
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-377325"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 18:17:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-377325
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 18:19:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 18:19:37 +0000   Sat, 13 Dec 2025 18:17:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 18:19:37 +0000   Sat, 13 Dec 2025 18:17:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 18:19:37 +0000   Sat, 13 Dec 2025 18:17:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 18:19:37 +0000   Sat, 13 Dec 2025 18:18:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-377325
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 78f85184c267cd52312ad0096937f858
	  System UUID:                5b0dab5a-1c6e-44ba-8710-19d123f14c68
	  Boot ID:                    76aeba50-958b-45ee-957d-e00cd07a99b2
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (26 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  default                     cloud-spanner-emulator-5bdddb765-vxvnq       0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  gadget                      gadget-btw98                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  gcp-auth                    gcp-auth-78565c9fb4-t8vr8                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-422pz    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         110s
	  kube-system                 coredns-66bc5c9577-6ct6w                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     116s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 csi-hostpathplugin-rlkjk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 etcd-addons-377325                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m3s
	  kube-system                 kindnet-rtw78                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      117s
	  kube-system                 kube-apiserver-addons-377325                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-controller-manager-addons-377325        200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-proxy-m8qkk                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-scheduler-addons-377325                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 metrics-server-85b7d694d7-xj9z5              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         111s
	  kube-system                 nvidia-device-plugin-daemonset-qfgpv         0 (0%)        0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 registry-6b586f9694-b6lxz                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 registry-creds-764b6fb674-f9qf2              0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 registry-proxy-zxcm2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 snapshot-controller-7d9fbc56b8-4ddpz         0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 snapshot-controller-7d9fbc56b8-sl9gg         0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  local-path-storage          local-path-provisioner-648f6765c9-pgkvr      0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-4g4kw               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     111s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 115s  kube-proxy       
	  Normal   Starting                 2m2s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m2s  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m2s  kubelet          Node addons-377325 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m2s  kubelet          Node addons-377325 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m2s  kubelet          Node addons-377325 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           117s  node-controller  Node addons-377325 event: Registered Node addons-377325 in Controller
	  Normal   NodeReady                75s   kubelet          Node addons-377325 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec13 17:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014739] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.517365] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033368] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.774100] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.795951] kauditd_printk_skb: 39 callbacks suppressed
	[Dec13 18:17] overlayfs: idmapped layers are currently not supported
	[  +0.067652] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [9f44e406e70a42ff3053d90866118a64ff6559f7d4c5878e24daa08620477af0] <==
	{"level":"warn","ts":"2025-12-13T18:17:50.876713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T18:17:50.887073Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T18:17:50.905286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T18:17:50.922096Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T18:17:50.939137Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T18:17:50.956980Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T18:17:50.974506Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T18:17:50.994464Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T18:17:51.009085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T18:17:51.034541Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T18:17:51.050967Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T18:17:51.064639Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T18:17:51.087235Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T18:17:51.106405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T18:17:51.163188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T18:17:51.207117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T18:17:51.222412Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T18:17:51.247196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T18:17:51.309621Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T18:18:07.057661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T18:18:07.073706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T18:18:29.173439Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T18:18:29.191066Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T18:18:29.229334Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T18:18:29.241413Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39040","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [18b8bafbb61534744f02d69567b274020e1f069ec20206f130bb2a96bbfd9099] <==
	2025/12/13 18:19:25 GCP Auth Webhook started!
	2025/12/13 18:19:44 Ready to marshal response ...
	2025/12/13 18:19:44 Ready to write response ...
	2025/12/13 18:19:44 Ready to marshal response ...
	2025/12/13 18:19:44 Ready to write response ...
	2025/12/13 18:19:44 Ready to marshal response ...
	2025/12/13 18:19:44 Ready to write response ...
	
	
	==> kernel <==
	 18:19:56 up  1:02,  0 user,  load average: 2.77, 1.45, 0.58
	Linux addons-377325 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [05178b358a31f960ebd0c746e41e311b3501e13c8dc83cd6e55fdc24cb53d30a] <==
	I1213 18:18:01.321590       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1213 18:18:31.320933       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1213 18:18:31.320933       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1213 18:18:31.321109       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1213 18:18:31.321847       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1213 18:18:32.921821       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1213 18:18:32.921852       1 metrics.go:72] Registering metrics
	I1213 18:18:32.921964       1 controller.go:711] "Syncing nftables rules"
	E1213 18:18:32.922010       1 controller.go:417] "reading nfqueue stats" err="open /proc/net/netfilter/nfnetlink_queue: no such file or directory"
	I1213 18:18:41.326809       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 18:18:41.326864       1 main.go:301] handling current node
	I1213 18:18:51.320349       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 18:18:51.320593       1 main.go:301] handling current node
	I1213 18:19:01.321472       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 18:19:01.321987       1 main.go:301] handling current node
	I1213 18:19:11.319644       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 18:19:11.319678       1 main.go:301] handling current node
	I1213 18:19:21.319877       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 18:19:21.319908       1 main.go:301] handling current node
	I1213 18:19:31.320591       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 18:19:31.320623       1 main.go:301] handling current node
	I1213 18:19:41.320528       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 18:19:41.320562       1 main.go:301] handling current node
	I1213 18:19:51.321155       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 18:19:51.321239       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3edde11a7e9037281d89cc0f87b82f0eea20cb96289b644d6152987f1b65be33] <==
	E1213 18:18:41.783616       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.110.109.137:443: connect: connection refused" logger="UnhandledError"
	W1213 18:18:41.784037       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.110.109.137:443: connect: connection refused
	E1213 18:18:41.784070       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.110.109.137:443: connect: connection refused" logger="UnhandledError"
	W1213 18:18:41.845163       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.110.109.137:443: connect: connection refused
	E1213 18:18:41.845206       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.110.109.137:443: connect: connection refused" logger="UnhandledError"
	W1213 18:19:05.849952       1 handler_proxy.go:99] no RequestInfo found in the context
	E1213 18:19:05.850006       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1213 18:19:05.850026       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1213 18:19:05.851133       1 handler_proxy.go:99] no RequestInfo found in the context
	E1213 18:19:05.851287       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1213 18:19:05.851303       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1213 18:19:20.442961       1 handler_proxy.go:99] no RequestInfo found in the context
	E1213 18:19:20.443026       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1213 18:19:20.443497       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.132.8:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.132.8:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.132.8:443: connect: connection refused" logger="UnhandledError"
	E1213 18:19:20.444691       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.132.8:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.132.8:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.132.8:443: connect: connection refused" logger="UnhandledError"
	E1213 18:19:20.450163       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.132.8:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.132.8:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.132.8:443: connect: connection refused" logger="UnhandledError"
	I1213 18:19:20.578887       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1213 18:19:54.117536       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:42996: use of closed network connection
	E1213 18:19:54.482076       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:43034: use of closed network connection
	
	
	==> kube-controller-manager [003f9ee38f6b439a2728ba924bc15a17baba7b021d1b5c661c1157951ed9412c] <==
	I1213 18:17:59.204328       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1213 18:17:59.204410       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1213 18:17:59.204674       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1213 18:17:59.204798       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1213 18:17:59.204838       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1213 18:17:59.207751       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1213 18:17:59.207961       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1213 18:17:59.208445       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1213 18:17:59.208924       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1213 18:17:59.212216       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 18:17:59.214345       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1213 18:17:59.219550       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1213 18:17:59.230907       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1213 18:17:59.233123       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	E1213 18:18:05.123938       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1213 18:18:29.165946       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1213 18:18:29.166097       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1213 18:18:29.166136       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1213 18:18:29.193894       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1213 18:18:29.216351       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1213 18:18:29.266731       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 18:18:29.321586       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 18:18:44.155535       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1213 18:18:59.279152       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1213 18:18:59.330769       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [57a4c5bd3b052a576bdbd867d075032671fea264b0d670cfb2500f3f7c53a338] <==
	I1213 18:18:01.112133       1 server_linux.go:53] "Using iptables proxy"
	I1213 18:18:01.194915       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1213 18:18:01.295090       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 18:18:01.295122       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1213 18:18:01.295190       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 18:18:01.342579       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 18:18:01.342633       1 server_linux.go:132] "Using iptables Proxier"
	I1213 18:18:01.352979       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 18:18:01.364478       1 server.go:527] "Version info" version="v1.34.2"
	I1213 18:18:01.364504       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 18:18:01.387438       1 config.go:200] "Starting service config controller"
	I1213 18:18:01.387461       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 18:18:01.387543       1 config.go:106] "Starting endpoint slice config controller"
	I1213 18:18:01.387550       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 18:18:01.387565       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 18:18:01.387570       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 18:18:01.388618       1 config.go:309] "Starting node config controller"
	I1213 18:18:01.388630       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 18:18:01.388637       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 18:18:01.487801       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 18:18:01.487843       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 18:18:01.487882       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [4c0b427c73b3bae515b7e2c83cf5f4d2deb0cb58b62c0b619e81dcf9540e3892] <==
	E1213 18:17:52.294117       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 18:17:52.294171       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1213 18:17:52.294230       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1213 18:17:52.294271       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1213 18:17:52.294317       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1213 18:17:52.294361       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1213 18:17:52.294437       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1213 18:17:52.294460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1213 18:17:52.294505       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1213 18:17:52.294588       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1213 18:17:52.294588       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1213 18:17:52.294651       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1213 18:17:52.294655       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1213 18:17:52.294703       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1213 18:17:52.294790       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1213 18:17:52.294874       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1213 18:17:53.123418       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1213 18:17:53.146543       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1213 18:17:53.146786       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1213 18:17:53.158421       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1213 18:17:53.172795       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1213 18:17:53.240887       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1213 18:17:53.243072       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1213 18:17:53.284644       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1213 18:17:55.663649       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 13 18:19:16 addons-377325 kubelet[1270]: I1213 18:19:16.370681    1270 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/cloud-spanner-emulator-5bdddb765-vxvnq" secret="" err="secret \"gcp-auth\" not found"
	Dec 13 18:19:18 addons-377325 kubelet[1270]: I1213 18:19:18.396663    1270 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/metrics-server-85b7d694d7-xj9z5" podStartSLOduration=38.836987006 podStartE2EDuration="1m13.396644508s" podCreationTimestamp="2025-12-13 18:18:05 +0000 UTC" firstStartedPulling="2025-12-13 18:18:42.829377211 +0000 UTC m=+48.220722422" lastFinishedPulling="2025-12-13 18:19:17.389034713 +0000 UTC m=+82.780379924" observedRunningTime="2025-12-13 18:19:18.395485052 +0000 UTC m=+83.786830263" watchObservedRunningTime="2025-12-13 18:19:18.396644508 +0000 UTC m=+83.787989711"
	Dec 13 18:19:18 addons-377325 kubelet[1270]: I1213 18:19:18.397380    1270 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/cloud-spanner-emulator-5bdddb765-vxvnq" podStartSLOduration=43.054902311 podStartE2EDuration="1m15.397357953s" podCreationTimestamp="2025-12-13 18:18:03 +0000 UTC" firstStartedPulling="2025-12-13 18:18:42.778462411 +0000 UTC m=+48.169807622" lastFinishedPulling="2025-12-13 18:19:15.120918012 +0000 UTC m=+80.512263264" observedRunningTime="2025-12-13 18:19:15.383876902 +0000 UTC m=+80.775222146" watchObservedRunningTime="2025-12-13 18:19:18.397357953 +0000 UTC m=+83.788703156"
	Dec 13 18:19:22 addons-377325 kubelet[1270]: I1213 18:19:22.393323    1270 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-zxcm2" secret="" err="secret \"gcp-auth\" not found"
	Dec 13 18:19:23 addons-377325 kubelet[1270]: I1213 18:19:23.038283    1270 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-zxcm2" podStartSLOduration=3.083168448 podStartE2EDuration="42.038261206s" podCreationTimestamp="2025-12-13 18:18:41 +0000 UTC" firstStartedPulling="2025-12-13 18:18:42.980621016 +0000 UTC m=+48.371966218" lastFinishedPulling="2025-12-13 18:19:21.935713765 +0000 UTC m=+87.327058976" observedRunningTime="2025-12-13 18:19:22.414052007 +0000 UTC m=+87.805397226" watchObservedRunningTime="2025-12-13 18:19:23.038261206 +0000 UTC m=+88.429606425"
	Dec 13 18:19:23 addons-377325 kubelet[1270]: I1213 18:19:23.398184    1270 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-zxcm2" secret="" err="secret \"gcp-auth\" not found"
	Dec 13 18:19:24 addons-377325 kubelet[1270]: I1213 18:19:24.726172    1270 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0604075f-e8e1-4679-9048-4173505f5727" path="/var/lib/kubelet/pods/0604075f-e8e1-4679-9048-4173505f5727/volumes"
	Dec 13 18:19:31 addons-377325 kubelet[1270]: I1213 18:19:31.037143    1270 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-t8vr8" podStartSLOduration=55.363660309 podStartE2EDuration="1m22.037098907s" podCreationTimestamp="2025-12-13 18:18:09 +0000 UTC" firstStartedPulling="2025-12-13 18:18:58.570513634 +0000 UTC m=+63.961858845" lastFinishedPulling="2025-12-13 18:19:25.243952232 +0000 UTC m=+90.635297443" observedRunningTime="2025-12-13 18:19:25.425213924 +0000 UTC m=+90.816559135" watchObservedRunningTime="2025-12-13 18:19:31.037098907 +0000 UTC m=+96.428444118"
	Dec 13 18:19:32 addons-377325 kubelet[1270]: I1213 18:19:32.727849    1270 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2f1b573-0d1e-4c59-812b-81d533fa3ea7" path="/var/lib/kubelet/pods/d2f1b573-0d1e-4c59-812b-81d533fa3ea7/volumes"
	Dec 13 18:19:36 addons-377325 kubelet[1270]: I1213 18:19:36.486531    1270 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-85d4c799dd-422pz" podStartSLOduration=57.801687483 podStartE2EDuration="1m30.486514198s" podCreationTimestamp="2025-12-13 18:18:06 +0000 UTC" firstStartedPulling="2025-12-13 18:18:58.571821581 +0000 UTC m=+63.963166784" lastFinishedPulling="2025-12-13 18:19:31.256648297 +0000 UTC m=+96.647993499" observedRunningTime="2025-12-13 18:19:31.462469876 +0000 UTC m=+96.853815079" watchObservedRunningTime="2025-12-13 18:19:36.486514198 +0000 UTC m=+101.877859401"
	Dec 13 18:19:38 addons-377325 kubelet[1270]: I1213 18:19:38.929996    1270 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Dec 13 18:19:38 addons-377325 kubelet[1270]: I1213 18:19:38.930060    1270 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Dec 13 18:19:39 addons-377325 kubelet[1270]: I1213 18:19:39.964968    1270 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-btw98" podStartSLOduration=69.196663444 podStartE2EDuration="1m34.96494996s" podCreationTimestamp="2025-12-13 18:18:05 +0000 UTC" firstStartedPulling="2025-12-13 18:19:09.853665597 +0000 UTC m=+75.245010800" lastFinishedPulling="2025-12-13 18:19:35.621952105 +0000 UTC m=+101.013297316" observedRunningTime="2025-12-13 18:19:36.489252325 +0000 UTC m=+101.880597536" watchObservedRunningTime="2025-12-13 18:19:39.96494996 +0000 UTC m=+105.356295155"
	Dec 13 18:19:43 addons-377325 kubelet[1270]: I1213 18:19:43.468053    1270 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-rlkjk" podStartSLOduration=3.96902288 podStartE2EDuration="1m2.468033616s" podCreationTimestamp="2025-12-13 18:18:41 +0000 UTC" firstStartedPulling="2025-12-13 18:18:42.622458153 +0000 UTC m=+48.013803356" lastFinishedPulling="2025-12-13 18:19:41.121468889 +0000 UTC m=+106.512814092" observedRunningTime="2025-12-13 18:19:41.531993087 +0000 UTC m=+106.923338314" watchObservedRunningTime="2025-12-13 18:19:43.468033616 +0000 UTC m=+108.859378819"
	Dec 13 18:19:44 addons-377325 kubelet[1270]: I1213 18:19:44.916074    1270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/4987a62a-ffa1-4bce-ada0-94e799629c3e-gcp-creds\") pod \"busybox\" (UID: \"4987a62a-ffa1-4bce-ada0-94e799629c3e\") " pod="default/busybox"
	Dec 13 18:19:44 addons-377325 kubelet[1270]: I1213 18:19:44.916142    1270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttqjp\" (UniqueName: \"kubernetes.io/projected/4987a62a-ffa1-4bce-ada0-94e799629c3e-kube-api-access-ttqjp\") pod \"busybox\" (UID: \"4987a62a-ffa1-4bce-ada0-94e799629c3e\") " pod="default/busybox"
	Dec 13 18:19:45 addons-377325 kubelet[1270]: E1213 18:19:45.750548    1270 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Dec 13 18:19:45 addons-377325 kubelet[1270]: E1213 18:19:45.750797    1270 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e714a411-3862-4ffa-a880-421fa8708466-gcr-creds podName:e714a411-3862-4ffa-a880-421fa8708466 nodeName:}" failed. No retries permitted until 2025-12-13 18:20:49.750775469 +0000 UTC m=+175.142120680 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/e714a411-3862-4ffa-a880-421fa8708466-gcr-creds") pod "registry-creds-764b6fb674-f9qf2" (UID: "e714a411-3862-4ffa-a880-421fa8708466") : secret "registry-creds-gcr" not found
	Dec 13 18:19:47 addons-377325 kubelet[1270]: I1213 18:19:47.549400    1270 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.3819869630000001 podStartE2EDuration="3.549382548s" podCreationTimestamp="2025-12-13 18:19:44 +0000 UTC" firstStartedPulling="2025-12-13 18:19:45.203935928 +0000 UTC m=+110.595281131" lastFinishedPulling="2025-12-13 18:19:47.371331513 +0000 UTC m=+112.762676716" observedRunningTime="2025-12-13 18:19:47.548566538 +0000 UTC m=+112.939911774" watchObservedRunningTime="2025-12-13 18:19:47.549382548 +0000 UTC m=+112.940727751"
	Dec 13 18:19:54 addons-377325 kubelet[1270]: I1213 18:19:54.757842    1270 scope.go:117] "RemoveContainer" containerID="2941325e95f176cea74c2a80dfb45e004b8e33250946fed57e9e7920e11773f4"
	Dec 13 18:19:54 addons-377325 kubelet[1270]: I1213 18:19:54.780958    1270 scope.go:117] "RemoveContainer" containerID="007d0527b2841baf1cf1ae1a5cd308367ddde8ff8bd9574a79d1f209bd538e19"
	Dec 13 18:19:54 addons-377325 kubelet[1270]: E1213 18:19:54.930081    1270 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/e0a597b46960213a5d2ef76395e1a096e93817f1a62cda301822ef723e9291ca/diff" to get inode usage: stat /var/lib/containers/storage/overlay/e0a597b46960213a5d2ef76395e1a096e93817f1a62cda301822ef723e9291ca/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/gcp-auth_gcp-auth-certs-create-nmt5h_0604075f-e8e1-4679-9048-4173505f5727/create/0.log" to get inode usage: stat /var/log/pods/gcp-auth_gcp-auth-certs-create-nmt5h_0604075f-e8e1-4679-9048-4173505f5727/create/0.log: no such file or directory
	Dec 13 18:19:54 addons-377325 kubelet[1270]: E1213 18:19:54.940875    1270 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/0fdc179a08d8f7219cdfebd35d6d1f9d795eaf79360119ea53e747bad1406289/diff" to get inode usage: stat /var/lib/containers/storage/overlay/0fdc179a08d8f7219cdfebd35d6d1f9d795eaf79360119ea53e747bad1406289/diff: no such file or directory, extraDiskErr: <nil>
	Dec 13 18:19:54 addons-377325 kubelet[1270]: E1213 18:19:54.969484    1270 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/4de533abab9f04fe74794eb9801b00941ced3a794dea073ad3b647a455602623/diff" to get inode usage: stat /var/lib/containers/storage/overlay/4de533abab9f04fe74794eb9801b00941ced3a794dea073ad3b647a455602623/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/gcp-auth_gcp-auth-certs-patch-rclbj_d2f1b573-0d1e-4c59-812b-81d533fa3ea7/patch/1.log" to get inode usage: stat /var/log/pods/gcp-auth_gcp-auth-certs-patch-rclbj_d2f1b573-0d1e-4c59-812b-81d533fa3ea7/patch/1.log: no such file or directory
	Dec 13 18:19:54 addons-377325 kubelet[1270]: E1213 18:19:54.980884    1270 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/dafbb49f5b484a8497225ae834f898128211f3b529f271776c8dbedb2109b806/diff" to get inode usage: stat /var/lib/containers/storage/overlay/dafbb49f5b484a8497225ae834f898128211f3b529f271776c8dbedb2109b806/diff: no such file or directory, extraDiskErr: <nil>
	
	
	==> storage-provisioner [dae0269172396ca9383a18ef3e4f9883c0bb9bf733a41e2b5d7701c47abcbf45] <==
	W1213 18:19:31.456186       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 18:19:33.460171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 18:19:33.465267       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 18:19:35.468321       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 18:19:35.475260       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 18:19:37.480381       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 18:19:37.486204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 18:19:39.489560       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 18:19:39.494708       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 18:19:41.501658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 18:19:41.506601       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 18:19:43.510694       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 18:19:43.520899       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 18:19:45.525972       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 18:19:45.531205       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 18:19:47.535421       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 18:19:47.541077       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 18:19:49.543977       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 18:19:49.550570       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 18:19:51.553977       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 18:19:51.558838       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 18:19:53.563318       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 18:19:53.570641       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 18:19:55.574284       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 18:19:55.585115       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
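The storage-provisioner block in the stdout above repeats the same client-go warning every couple of seconds: the pod is still reading and locking v1 Endpoints objects, which the API server now flags as deprecated in favour of discovery.k8s.io/v1 EndpointSlice. A quick, hedged way to confirm the replacement resource is served by this cluster (assuming the addons-377325 context is still reachable from the test host):

	kubectl --context addons-377325 api-resources --api-group=discovery.k8s.io
	kubectl --context addons-377325 get endpointslices.discovery.k8s.io -A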
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-377325 -n addons-377325
helpers_test.go:270: (dbg) Run:  kubectl --context addons-377325 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: ingress-nginx-admission-create-bhb5h ingress-nginx-admission-patch-sqd6d registry-creds-764b6fb674-f9qf2
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-377325 describe pod ingress-nginx-admission-create-bhb5h ingress-nginx-admission-patch-sqd6d registry-creds-764b6fb674-f9qf2
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-377325 describe pod ingress-nginx-admission-create-bhb5h ingress-nginx-admission-patch-sqd6d registry-creds-764b6fb674-f9qf2: exit status 1 (96.009076ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-bhb5h" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-sqd6d" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-f9qf2" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context addons-377325 describe pod ingress-nginx-admission-create-bhb5h ingress-nginx-admission-patch-sqd6d registry-creds-764b6fb674-f9qf2: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-377325 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-377325 addons disable headlamp --alsologtostderr -v=1: exit status 11 (253.273448ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 18:19:57.957076   12177 out.go:360] Setting OutFile to fd 1 ...
	I1213 18:19:57.957224   12177 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:19:57.957236   12177 out.go:374] Setting ErrFile to fd 2...
	I1213 18:19:57.957242   12177 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:19:57.957492   12177 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-2686/.minikube/bin
	I1213 18:19:57.957813   12177 mustload.go:66] Loading cluster: addons-377325
	I1213 18:19:57.958212   12177 config.go:182] Loaded profile config "addons-377325": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 18:19:57.958235   12177 addons.go:622] checking whether the cluster is paused
	I1213 18:19:57.958349   12177 config.go:182] Loaded profile config "addons-377325": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 18:19:57.958364   12177 host.go:66] Checking if "addons-377325" exists ...
	I1213 18:19:57.959197   12177 cli_runner.go:164] Run: docker container inspect addons-377325 --format={{.State.Status}}
	I1213 18:19:57.975829   12177 ssh_runner.go:195] Run: systemctl --version
	I1213 18:19:57.975968   12177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377325
	I1213 18:19:57.992769   12177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/addons-377325/id_rsa Username:docker}
	I1213 18:19:58.100399   12177 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 18:19:58.100558   12177 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 18:19:58.130666   12177 cri.go:89] found id: "42d706a88ed1b79de9cbc8220725f23931d77e619f962b73e511fcb0df095dcf"
	I1213 18:19:58.130688   12177 cri.go:89] found id: "1c4f8a1dece343dfd524ce5e6db2a545f5bbcabf4319371df21b295d9978f460"
	I1213 18:19:58.130693   12177 cri.go:89] found id: "f361bc25cf32b543c565a18b16afb390523428d84bc14ad86dbacef94cd618f2"
	I1213 18:19:58.130697   12177 cri.go:89] found id: "352bbc3896f303b3b4b4edcffdd2af5759da504004de35069dcbf6701b7ff404"
	I1213 18:19:58.130701   12177 cri.go:89] found id: "2e7fb6d0ca7acd5082666d2f5b93e6106772a93783323b9d70c8dc01cc803b6b"
	I1213 18:19:58.130705   12177 cri.go:89] found id: "cd33fc9243f510b27e6ee856df4a733493114c65ecedbe49e4d2e4db5c3f1a92"
	I1213 18:19:58.130708   12177 cri.go:89] found id: "3946c9e84e3da8e144dd011e9aad2d763f490b97fc556c1432831aec7351dd15"
	I1213 18:19:58.130711   12177 cri.go:89] found id: "054c83d5a1f87b8b0447a3c96743b01e535aa374946a4476cf156bdf43c4634b"
	I1213 18:19:58.130714   12177 cri.go:89] found id: "87610c2eb50cf16ef807cbc696e6152bee0cc4d51e77b5fea346b538dc7ca77a"
	I1213 18:19:58.130720   12177 cri.go:89] found id: "52764c4f81789f7ac0788d22170eef03d2d3c697ff94cd73d0a431f152db2e0d"
	I1213 18:19:58.130723   12177 cri.go:89] found id: "0a800ad4dd0e939ce2cf0fb3f8e2ebd3fe5f4fe340c694377880af81c0b56b82"
	I1213 18:19:58.130727   12177 cri.go:89] found id: "7dddc3bceec5a40164bf2128e718b8dad6c5c34fd5b6a656b28d732b6f85e291"
	I1213 18:19:58.130730   12177 cri.go:89] found id: "599b8ce504818d0e1d93166a52551dc93f2ae22e19769a32db7f1806184b2db0"
	I1213 18:19:58.130734   12177 cri.go:89] found id: "228e6f9a0fdeda7bb28f407279f8c6549c2abaacc0fe0d2fa8dda1eadc802e23"
	I1213 18:19:58.130737   12177 cri.go:89] found id: "0d77a566cb2c6b0cbe174ab2f0537c30a6a6ba2b40472501b4d0cac4192769a2"
	I1213 18:19:58.130748   12177 cri.go:89] found id: "dae0269172396ca9383a18ef3e4f9883c0bb9bf733a41e2b5d7701c47abcbf45"
	I1213 18:19:58.130751   12177 cri.go:89] found id: "c37b9bf999a3f7ee5efa91a30230aedd4764b122566edbc45a747e71e6f77aee"
	I1213 18:19:58.130758   12177 cri.go:89] found id: "57a4c5bd3b052a576bdbd867d075032671fea264b0d670cfb2500f3f7c53a338"
	I1213 18:19:58.130765   12177 cri.go:89] found id: "05178b358a31f960ebd0c746e41e311b3501e13c8dc83cd6e55fdc24cb53d30a"
	I1213 18:19:58.130769   12177 cri.go:89] found id: "4c0b427c73b3bae515b7e2c83cf5f4d2deb0cb58b62c0b619e81dcf9540e3892"
	I1213 18:19:58.130773   12177 cri.go:89] found id: "003f9ee38f6b439a2728ba924bc15a17baba7b021d1b5c661c1157951ed9412c"
	I1213 18:19:58.130783   12177 cri.go:89] found id: "9f44e406e70a42ff3053d90866118a64ff6559f7d4c5878e24daa08620477af0"
	I1213 18:19:58.130786   12177 cri.go:89] found id: "3edde11a7e9037281d89cc0f87b82f0eea20cb96289b644d6152987f1b65be33"
	I1213 18:19:58.130789   12177 cri.go:89] found id: ""
	I1213 18:19:58.130839   12177 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 18:19:58.145941   12177 out.go:203] 
	W1213 18:19:58.148987   12177 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T18:19:58Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T18:19:58Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 18:19:58.149092   12177 out.go:285] * 
	* 
	W1213 18:19:58.152857   12177 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 18:19:58.155750   12177 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-377325 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.41s)
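Every addon-disable failure in this run (Headlamp above, and CloudSpanner, LocalPath, NvidiaDevicePlugin, and Yakd below) exits the same way: minikube's paused-state check lists the kube-system containers through crictl successfully, then runs sudo runc list -f json, which fails because /run/runc does not exist on this CRI-O node. A hedged reproduction of that check by hand, using only the commands already visible in the stderr above:

	out/minikube-linux-arm64 -p addons-377325 ssh
	# inside the node:
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system   # succeeds via the CRI socket
	sudo runc list -f json    # fails: open /run/runc: no such file or directory
	ls -ld /run/runc          # confirms the runc state directory is missing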

                                                
                                    

TestAddons/parallel/CloudSpanner (5.3s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5bdddb765-vxvnq" [8438e2e4-5c26-488f-ace8-d7537f2fb4b8] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003994772s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-377325 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-377325 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (290.004072ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 18:21:05.566914   14065 out.go:360] Setting OutFile to fd 1 ...
	I1213 18:21:05.567219   14065 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:21:05.567240   14065 out.go:374] Setting ErrFile to fd 2...
	I1213 18:21:05.567246   14065 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:21:05.569039   14065 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-2686/.minikube/bin
	I1213 18:21:05.569373   14065 mustload.go:66] Loading cluster: addons-377325
	I1213 18:21:05.569755   14065 config.go:182] Loaded profile config "addons-377325": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 18:21:05.569773   14065 addons.go:622] checking whether the cluster is paused
	I1213 18:21:05.569884   14065 config.go:182] Loaded profile config "addons-377325": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 18:21:05.569899   14065 host.go:66] Checking if "addons-377325" exists ...
	I1213 18:21:05.570373   14065 cli_runner.go:164] Run: docker container inspect addons-377325 --format={{.State.Status}}
	I1213 18:21:05.600756   14065 ssh_runner.go:195] Run: systemctl --version
	I1213 18:21:05.600825   14065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377325
	I1213 18:21:05.626263   14065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/addons-377325/id_rsa Username:docker}
	I1213 18:21:05.739434   14065 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 18:21:05.739521   14065 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 18:21:05.768871   14065 cri.go:89] found id: "7335e54d52b88b155633adab6e04eb259c2213158e5470ec00d87cf5228ac9ef"
	I1213 18:21:05.768898   14065 cri.go:89] found id: "42d706a88ed1b79de9cbc8220725f23931d77e619f962b73e511fcb0df095dcf"
	I1213 18:21:05.768903   14065 cri.go:89] found id: "1c4f8a1dece343dfd524ce5e6db2a545f5bbcabf4319371df21b295d9978f460"
	I1213 18:21:05.768907   14065 cri.go:89] found id: "f361bc25cf32b543c565a18b16afb390523428d84bc14ad86dbacef94cd618f2"
	I1213 18:21:05.768910   14065 cri.go:89] found id: "352bbc3896f303b3b4b4edcffdd2af5759da504004de35069dcbf6701b7ff404"
	I1213 18:21:05.768915   14065 cri.go:89] found id: "2e7fb6d0ca7acd5082666d2f5b93e6106772a93783323b9d70c8dc01cc803b6b"
	I1213 18:21:05.768918   14065 cri.go:89] found id: "cd33fc9243f510b27e6ee856df4a733493114c65ecedbe49e4d2e4db5c3f1a92"
	I1213 18:21:05.768921   14065 cri.go:89] found id: "3946c9e84e3da8e144dd011e9aad2d763f490b97fc556c1432831aec7351dd15"
	I1213 18:21:05.768924   14065 cri.go:89] found id: "054c83d5a1f87b8b0447a3c96743b01e535aa374946a4476cf156bdf43c4634b"
	I1213 18:21:05.768933   14065 cri.go:89] found id: "87610c2eb50cf16ef807cbc696e6152bee0cc4d51e77b5fea346b538dc7ca77a"
	I1213 18:21:05.768937   14065 cri.go:89] found id: "52764c4f81789f7ac0788d22170eef03d2d3c697ff94cd73d0a431f152db2e0d"
	I1213 18:21:05.768939   14065 cri.go:89] found id: "0a800ad4dd0e939ce2cf0fb3f8e2ebd3fe5f4fe340c694377880af81c0b56b82"
	I1213 18:21:05.768942   14065 cri.go:89] found id: "7dddc3bceec5a40164bf2128e718b8dad6c5c34fd5b6a656b28d732b6f85e291"
	I1213 18:21:05.768945   14065 cri.go:89] found id: "599b8ce504818d0e1d93166a52551dc93f2ae22e19769a32db7f1806184b2db0"
	I1213 18:21:05.768948   14065 cri.go:89] found id: "228e6f9a0fdeda7bb28f407279f8c6549c2abaacc0fe0d2fa8dda1eadc802e23"
	I1213 18:21:05.768957   14065 cri.go:89] found id: "0d77a566cb2c6b0cbe174ab2f0537c30a6a6ba2b40472501b4d0cac4192769a2"
	I1213 18:21:05.768961   14065 cri.go:89] found id: "dae0269172396ca9383a18ef3e4f9883c0bb9bf733a41e2b5d7701c47abcbf45"
	I1213 18:21:05.768966   14065 cri.go:89] found id: "c37b9bf999a3f7ee5efa91a30230aedd4764b122566edbc45a747e71e6f77aee"
	I1213 18:21:05.768969   14065 cri.go:89] found id: "57a4c5bd3b052a576bdbd867d075032671fea264b0d670cfb2500f3f7c53a338"
	I1213 18:21:05.768972   14065 cri.go:89] found id: "05178b358a31f960ebd0c746e41e311b3501e13c8dc83cd6e55fdc24cb53d30a"
	I1213 18:21:05.768977   14065 cri.go:89] found id: "4c0b427c73b3bae515b7e2c83cf5f4d2deb0cb58b62c0b619e81dcf9540e3892"
	I1213 18:21:05.768980   14065 cri.go:89] found id: "003f9ee38f6b439a2728ba924bc15a17baba7b021d1b5c661c1157951ed9412c"
	I1213 18:21:05.768983   14065 cri.go:89] found id: "9f44e406e70a42ff3053d90866118a64ff6559f7d4c5878e24daa08620477af0"
	I1213 18:21:05.768986   14065 cri.go:89] found id: "3edde11a7e9037281d89cc0f87b82f0eea20cb96289b644d6152987f1b65be33"
	I1213 18:21:05.768989   14065 cri.go:89] found id: ""
	I1213 18:21:05.769085   14065 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 18:21:05.785277   14065 out.go:203] 
	W1213 18:21:05.788078   14065 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T18:21:05Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T18:21:05Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 18:21:05.788101   14065 out.go:285] * 
	* 
	W1213 18:21:05.791961   14065 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 18:21:05.794859   14065 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-377325 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.30s)

                                                
                                    
TestAddons/parallel/LocalPath (8.38s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-377325 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-377325 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-377325 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-377325 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-377325 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-377325 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-377325 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [35c15aaa-bb17-48f4-bf2b-aab24fae25db] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [35c15aaa-bb17-48f4-bf2b-aab24fae25db] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [35c15aaa-bb17-48f4-bf2b-aab24fae25db] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003022945s
addons_test.go:969: (dbg) Run:  kubectl --context addons-377325 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-arm64 -p addons-377325 ssh "cat /opt/local-path-provisioner/pvc-0724d684-911a-4545-b553-e71f3e94668e_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-377325 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-377325 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-377325 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-377325 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (259.044594ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 18:20:48.422458   13731 out.go:360] Setting OutFile to fd 1 ...
	I1213 18:20:48.422611   13731 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:20:48.422625   13731 out.go:374] Setting ErrFile to fd 2...
	I1213 18:20:48.422631   13731 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:20:48.422885   13731 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-2686/.minikube/bin
	I1213 18:20:48.423160   13731 mustload.go:66] Loading cluster: addons-377325
	I1213 18:20:48.423530   13731 config.go:182] Loaded profile config "addons-377325": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 18:20:48.423549   13731 addons.go:622] checking whether the cluster is paused
	I1213 18:20:48.423657   13731 config.go:182] Loaded profile config "addons-377325": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 18:20:48.423673   13731 host.go:66] Checking if "addons-377325" exists ...
	I1213 18:20:48.424173   13731 cli_runner.go:164] Run: docker container inspect addons-377325 --format={{.State.Status}}
	I1213 18:20:48.445883   13731 ssh_runner.go:195] Run: systemctl --version
	I1213 18:20:48.445947   13731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377325
	I1213 18:20:48.463122   13731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/addons-377325/id_rsa Username:docker}
	I1213 18:20:48.572671   13731 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 18:20:48.572763   13731 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 18:20:48.600988   13731 cri.go:89] found id: "42d706a88ed1b79de9cbc8220725f23931d77e619f962b73e511fcb0df095dcf"
	I1213 18:20:48.601024   13731 cri.go:89] found id: "1c4f8a1dece343dfd524ce5e6db2a545f5bbcabf4319371df21b295d9978f460"
	I1213 18:20:48.601029   13731 cri.go:89] found id: "f361bc25cf32b543c565a18b16afb390523428d84bc14ad86dbacef94cd618f2"
	I1213 18:20:48.601033   13731 cri.go:89] found id: "352bbc3896f303b3b4b4edcffdd2af5759da504004de35069dcbf6701b7ff404"
	I1213 18:20:48.601040   13731 cri.go:89] found id: "2e7fb6d0ca7acd5082666d2f5b93e6106772a93783323b9d70c8dc01cc803b6b"
	I1213 18:20:48.601044   13731 cri.go:89] found id: "cd33fc9243f510b27e6ee856df4a733493114c65ecedbe49e4d2e4db5c3f1a92"
	I1213 18:20:48.601047   13731 cri.go:89] found id: "3946c9e84e3da8e144dd011e9aad2d763f490b97fc556c1432831aec7351dd15"
	I1213 18:20:48.601051   13731 cri.go:89] found id: "054c83d5a1f87b8b0447a3c96743b01e535aa374946a4476cf156bdf43c4634b"
	I1213 18:20:48.601054   13731 cri.go:89] found id: "87610c2eb50cf16ef807cbc696e6152bee0cc4d51e77b5fea346b538dc7ca77a"
	I1213 18:20:48.601060   13731 cri.go:89] found id: "52764c4f81789f7ac0788d22170eef03d2d3c697ff94cd73d0a431f152db2e0d"
	I1213 18:20:48.601063   13731 cri.go:89] found id: "0a800ad4dd0e939ce2cf0fb3f8e2ebd3fe5f4fe340c694377880af81c0b56b82"
	I1213 18:20:48.601066   13731 cri.go:89] found id: "7dddc3bceec5a40164bf2128e718b8dad6c5c34fd5b6a656b28d732b6f85e291"
	I1213 18:20:48.601070   13731 cri.go:89] found id: "599b8ce504818d0e1d93166a52551dc93f2ae22e19769a32db7f1806184b2db0"
	I1213 18:20:48.601073   13731 cri.go:89] found id: "228e6f9a0fdeda7bb28f407279f8c6549c2abaacc0fe0d2fa8dda1eadc802e23"
	I1213 18:20:48.601076   13731 cri.go:89] found id: "0d77a566cb2c6b0cbe174ab2f0537c30a6a6ba2b40472501b4d0cac4192769a2"
	I1213 18:20:48.601081   13731 cri.go:89] found id: "dae0269172396ca9383a18ef3e4f9883c0bb9bf733a41e2b5d7701c47abcbf45"
	I1213 18:20:48.601084   13731 cri.go:89] found id: "c37b9bf999a3f7ee5efa91a30230aedd4764b122566edbc45a747e71e6f77aee"
	I1213 18:20:48.601088   13731 cri.go:89] found id: "57a4c5bd3b052a576bdbd867d075032671fea264b0d670cfb2500f3f7c53a338"
	I1213 18:20:48.601091   13731 cri.go:89] found id: "05178b358a31f960ebd0c746e41e311b3501e13c8dc83cd6e55fdc24cb53d30a"
	I1213 18:20:48.601094   13731 cri.go:89] found id: "4c0b427c73b3bae515b7e2c83cf5f4d2deb0cb58b62c0b619e81dcf9540e3892"
	I1213 18:20:48.601099   13731 cri.go:89] found id: "003f9ee38f6b439a2728ba924bc15a17baba7b021d1b5c661c1157951ed9412c"
	I1213 18:20:48.601106   13731 cri.go:89] found id: "9f44e406e70a42ff3053d90866118a64ff6559f7d4c5878e24daa08620477af0"
	I1213 18:20:48.601110   13731 cri.go:89] found id: "3edde11a7e9037281d89cc0f87b82f0eea20cb96289b644d6152987f1b65be33"
	I1213 18:20:48.601113   13731 cri.go:89] found id: ""
	I1213 18:20:48.601161   13731 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 18:20:48.616340   13731 out.go:203] 
	W1213 18:20:48.620695   13731 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T18:20:48Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T18:20:48Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 18:20:48.620779   13731 out.go:285] * 
	* 
	W1213 18:20:48.624644   13731 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 18:20:48.627913   13731 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-377325 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.38s)
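The pvc.yaml and pod.yaml applied at the start of this test come from testdata/storage-provisioner-rancher/ in the minikube repo and are not reproduced in this report. As a rough sketch only (the storageClassName local-path and the requested size are assumptions; the shipped manifests may differ), an equivalent claim against the addon's provisioner looks like:

	kubectl --context addons-377325 apply -f - <<'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: test-pvc
	  namespace: default
	spec:
	  accessModes: ["ReadWriteOnce"]
	  storageClassName: local-path
	  resources:
	    requests:
	      storage: 64Mi
	EOF

Such a claim typically stays Pending until a consuming pod is scheduled (local-path defaults to WaitForFirstConsumer binding), which matches the phase polling seen in the helpers_test.go lines above.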

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.47s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-qfgpv" [0270c6b1-ee5d-4441-ae6f-18e3e0423c29] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.025720959s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-377325 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-377325 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (445.543346ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 18:21:00.134287   13980 out.go:360] Setting OutFile to fd 1 ...
	I1213 18:21:00.139251   13980 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:21:00.139325   13980 out.go:374] Setting ErrFile to fd 2...
	I1213 18:21:00.139348   13980 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:21:00.139896   13980 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-2686/.minikube/bin
	I1213 18:21:00.140331   13980 mustload.go:66] Loading cluster: addons-377325
	I1213 18:21:00.140847   13980 config.go:182] Loaded profile config "addons-377325": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 18:21:00.140898   13980 addons.go:622] checking whether the cluster is paused
	I1213 18:21:00.141077   13980 config.go:182] Loaded profile config "addons-377325": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 18:21:00.141160   13980 host.go:66] Checking if "addons-377325" exists ...
	I1213 18:21:00.141792   13980 cli_runner.go:164] Run: docker container inspect addons-377325 --format={{.State.Status}}
	I1213 18:21:00.185199   13980 ssh_runner.go:195] Run: systemctl --version
	I1213 18:21:00.185270   13980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377325
	I1213 18:21:00.253141   13980 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/addons-377325/id_rsa Username:docker}
	I1213 18:21:00.416663   13980 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 18:21:00.416751   13980 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 18:21:00.456004   13980 cri.go:89] found id: "981a98d58e7ca16547486a3ead4db6307f0c17d3916b78cdb6a2add7cbf32bed"
	I1213 18:21:00.456101   13980 cri.go:89] found id: "42d706a88ed1b79de9cbc8220725f23931d77e619f962b73e511fcb0df095dcf"
	I1213 18:21:00.456124   13980 cri.go:89] found id: "1c4f8a1dece343dfd524ce5e6db2a545f5bbcabf4319371df21b295d9978f460"
	I1213 18:21:00.456147   13980 cri.go:89] found id: "f361bc25cf32b543c565a18b16afb390523428d84bc14ad86dbacef94cd618f2"
	I1213 18:21:00.456170   13980 cri.go:89] found id: "352bbc3896f303b3b4b4edcffdd2af5759da504004de35069dcbf6701b7ff404"
	I1213 18:21:00.456194   13980 cri.go:89] found id: "2e7fb6d0ca7acd5082666d2f5b93e6106772a93783323b9d70c8dc01cc803b6b"
	I1213 18:21:00.456199   13980 cri.go:89] found id: "cd33fc9243f510b27e6ee856df4a733493114c65ecedbe49e4d2e4db5c3f1a92"
	I1213 18:21:00.456204   13980 cri.go:89] found id: "3946c9e84e3da8e144dd011e9aad2d763f490b97fc556c1432831aec7351dd15"
	I1213 18:21:00.456208   13980 cri.go:89] found id: "054c83d5a1f87b8b0447a3c96743b01e535aa374946a4476cf156bdf43c4634b"
	I1213 18:21:00.456215   13980 cri.go:89] found id: "87610c2eb50cf16ef807cbc696e6152bee0cc4d51e77b5fea346b538dc7ca77a"
	I1213 18:21:00.456223   13980 cri.go:89] found id: "52764c4f81789f7ac0788d22170eef03d2d3c697ff94cd73d0a431f152db2e0d"
	I1213 18:21:00.456227   13980 cri.go:89] found id: "0a800ad4dd0e939ce2cf0fb3f8e2ebd3fe5f4fe340c694377880af81c0b56b82"
	I1213 18:21:00.456230   13980 cri.go:89] found id: "7dddc3bceec5a40164bf2128e718b8dad6c5c34fd5b6a656b28d732b6f85e291"
	I1213 18:21:00.456234   13980 cri.go:89] found id: "599b8ce504818d0e1d93166a52551dc93f2ae22e19769a32db7f1806184b2db0"
	I1213 18:21:00.456244   13980 cri.go:89] found id: "228e6f9a0fdeda7bb28f407279f8c6549c2abaacc0fe0d2fa8dda1eadc802e23"
	I1213 18:21:00.456249   13980 cri.go:89] found id: "0d77a566cb2c6b0cbe174ab2f0537c30a6a6ba2b40472501b4d0cac4192769a2"
	I1213 18:21:00.456253   13980 cri.go:89] found id: "dae0269172396ca9383a18ef3e4f9883c0bb9bf733a41e2b5d7701c47abcbf45"
	I1213 18:21:00.456257   13980 cri.go:89] found id: "c37b9bf999a3f7ee5efa91a30230aedd4764b122566edbc45a747e71e6f77aee"
	I1213 18:21:00.456261   13980 cri.go:89] found id: "57a4c5bd3b052a576bdbd867d075032671fea264b0d670cfb2500f3f7c53a338"
	I1213 18:21:00.456264   13980 cri.go:89] found id: "05178b358a31f960ebd0c746e41e311b3501e13c8dc83cd6e55fdc24cb53d30a"
	I1213 18:21:00.456269   13980 cri.go:89] found id: "4c0b427c73b3bae515b7e2c83cf5f4d2deb0cb58b62c0b619e81dcf9540e3892"
	I1213 18:21:00.456281   13980 cri.go:89] found id: "003f9ee38f6b439a2728ba924bc15a17baba7b021d1b5c661c1157951ed9412c"
	I1213 18:21:00.456285   13980 cri.go:89] found id: "9f44e406e70a42ff3053d90866118a64ff6559f7d4c5878e24daa08620477af0"
	I1213 18:21:00.456288   13980 cri.go:89] found id: "3edde11a7e9037281d89cc0f87b82f0eea20cb96289b644d6152987f1b65be33"
	I1213 18:21:00.456291   13980 cri.go:89] found id: ""
	I1213 18:21:00.456349   13980 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 18:21:00.481408   13980 out.go:203] 
	W1213 18:21:00.484541   13980 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T18:21:00Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T18:21:00Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 18:21:00.484574   13980 out.go:285] * 
	* 
	W1213 18:21:00.488483   13980 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 18:21:00.491603   13980 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-377325 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.47s)

                                                
                                    
TestAddons/parallel/Yakd (6.39s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-5ff678cb9-4g4kw" [b1511f13-7133-429d-b0ad-9a4e14d33e59] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004026971s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-377325 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-377325 addons disable yakd --alsologtostderr -v=1: exit status 11 (387.255692ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 18:20:54.691737   13836 out.go:360] Setting OutFile to fd 1 ...
	I1213 18:20:54.691891   13836 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:20:54.691897   13836 out.go:374] Setting ErrFile to fd 2...
	I1213 18:20:54.691902   13836 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:20:54.692316   13836 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-2686/.minikube/bin
	I1213 18:20:54.693348   13836 mustload.go:66] Loading cluster: addons-377325
	I1213 18:20:54.694011   13836 config.go:182] Loaded profile config "addons-377325": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 18:20:54.694030   13836 addons.go:622] checking whether the cluster is paused
	I1213 18:20:54.694198   13836 config.go:182] Loaded profile config "addons-377325": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 18:20:54.694216   13836 host.go:66] Checking if "addons-377325" exists ...
	I1213 18:20:54.695403   13836 cli_runner.go:164] Run: docker container inspect addons-377325 --format={{.State.Status}}
	I1213 18:20:54.715159   13836 ssh_runner.go:195] Run: systemctl --version
	I1213 18:20:54.715214   13836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-377325
	I1213 18:20:54.736273   13836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/addons-377325/id_rsa Username:docker}
	I1213 18:20:54.853668   13836 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 18:20:54.853760   13836 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 18:20:54.965950   13836 cri.go:89] found id: "42d706a88ed1b79de9cbc8220725f23931d77e619f962b73e511fcb0df095dcf"
	I1213 18:20:54.966023   13836 cri.go:89] found id: "1c4f8a1dece343dfd524ce5e6db2a545f5bbcabf4319371df21b295d9978f460"
	I1213 18:20:54.966042   13836 cri.go:89] found id: "f361bc25cf32b543c565a18b16afb390523428d84bc14ad86dbacef94cd618f2"
	I1213 18:20:54.966063   13836 cri.go:89] found id: "352bbc3896f303b3b4b4edcffdd2af5759da504004de35069dcbf6701b7ff404"
	I1213 18:20:54.966103   13836 cri.go:89] found id: "2e7fb6d0ca7acd5082666d2f5b93e6106772a93783323b9d70c8dc01cc803b6b"
	I1213 18:20:54.966126   13836 cri.go:89] found id: "cd33fc9243f510b27e6ee856df4a733493114c65ecedbe49e4d2e4db5c3f1a92"
	I1213 18:20:54.966146   13836 cri.go:89] found id: "3946c9e84e3da8e144dd011e9aad2d763f490b97fc556c1432831aec7351dd15"
	I1213 18:20:54.966167   13836 cri.go:89] found id: "054c83d5a1f87b8b0447a3c96743b01e535aa374946a4476cf156bdf43c4634b"
	I1213 18:20:54.966202   13836 cri.go:89] found id: "87610c2eb50cf16ef807cbc696e6152bee0cc4d51e77b5fea346b538dc7ca77a"
	I1213 18:20:54.966222   13836 cri.go:89] found id: "52764c4f81789f7ac0788d22170eef03d2d3c697ff94cd73d0a431f152db2e0d"
	I1213 18:20:54.966242   13836 cri.go:89] found id: "0a800ad4dd0e939ce2cf0fb3f8e2ebd3fe5f4fe340c694377880af81c0b56b82"
	I1213 18:20:54.966273   13836 cri.go:89] found id: "7dddc3bceec5a40164bf2128e718b8dad6c5c34fd5b6a656b28d732b6f85e291"
	I1213 18:20:54.966295   13836 cri.go:89] found id: "599b8ce504818d0e1d93166a52551dc93f2ae22e19769a32db7f1806184b2db0"
	I1213 18:20:54.966315   13836 cri.go:89] found id: "228e6f9a0fdeda7bb28f407279f8c6549c2abaacc0fe0d2fa8dda1eadc802e23"
	I1213 18:20:54.966335   13836 cri.go:89] found id: "0d77a566cb2c6b0cbe174ab2f0537c30a6a6ba2b40472501b4d0cac4192769a2"
	I1213 18:20:54.966366   13836 cri.go:89] found id: "dae0269172396ca9383a18ef3e4f9883c0bb9bf733a41e2b5d7701c47abcbf45"
	I1213 18:20:54.966398   13836 cri.go:89] found id: "c37b9bf999a3f7ee5efa91a30230aedd4764b122566edbc45a747e71e6f77aee"
	I1213 18:20:54.966419   13836 cri.go:89] found id: "57a4c5bd3b052a576bdbd867d075032671fea264b0d670cfb2500f3f7c53a338"
	I1213 18:20:54.966452   13836 cri.go:89] found id: "05178b358a31f960ebd0c746e41e311b3501e13c8dc83cd6e55fdc24cb53d30a"
	I1213 18:20:54.966475   13836 cri.go:89] found id: "4c0b427c73b3bae515b7e2c83cf5f4d2deb0cb58b62c0b619e81dcf9540e3892"
	I1213 18:20:54.966501   13836 cri.go:89] found id: "003f9ee38f6b439a2728ba924bc15a17baba7b021d1b5c661c1157951ed9412c"
	I1213 18:20:54.966536   13836 cri.go:89] found id: "9f44e406e70a42ff3053d90866118a64ff6559f7d4c5878e24daa08620477af0"
	I1213 18:20:54.966560   13836 cri.go:89] found id: "3edde11a7e9037281d89cc0f87b82f0eea20cb96289b644d6152987f1b65be33"
	I1213 18:20:54.966579   13836 cri.go:89] found id: ""
	I1213 18:20:54.966660   13836 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 18:20:55.005405   13836 out.go:203] 
	W1213 18:20:55.009350   13836 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T18:20:54Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T18:20:54Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 18:20:55.009447   13836 out.go:285] * 
	* 
	W1213 18:20:55.013336   13836 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 18:20:55.017248   13836 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-377325 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.39s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (502.16s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-752103 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1213 18:29:44.921841    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 18:30:12.638953    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 18:31:42.465278    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-350101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 18:31:42.472258    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-350101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 18:31:42.484033    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-350101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 18:31:42.505440    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-350101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 18:31:42.547017    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-350101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 18:31:42.628529    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-350101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 18:31:42.790019    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-350101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 18:31:43.111689    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-350101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 18:31:43.753853    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-350101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 18:31:45.036181    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-350101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 18:31:47.598542    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-350101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 18:31:52.720689    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-350101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 18:32:02.962977    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-350101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 18:32:23.444556    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-350101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 18:33:04.405950    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-350101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 18:34:26.327862    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-350101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 18:34:44.921218    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-752103 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m20.682904514s)

                                                
                                                
-- stdout --
	* [functional-752103] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22122
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22122-2686/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-2686/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "functional-752103" primary control-plane node in "functional-752103" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Found network options:
	  - HTTP_PROXY=localhost:42681
	* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:42681 to docker env.
	! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-752103 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-752103 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000291647s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001290098s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001290098s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

                                                
                                                
** /stderr **
functional_test.go:2241: failed minikube start. args "out/minikube-linux-arm64 start -p functional-752103 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0": exit status 109
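The stderr above ends with a suggestion to pass a kubelet cgroup-driver override. A minimal sketch of that manual follow-up, assuming the same profile and flags as the failing invocation (the systemd value is only the hint printed in the error output, not a verified fix, and the journalctl step is just the diagnostic the kubeadm message itself recommends):

    # inspect the kubelet failure on the node before retrying
    out/minikube-linux-arm64 ssh -p functional-752103 -- sudo journalctl -xeu kubelet | tail -n 50
    # recreate the profile with the suggested kubelet cgroup-driver override
    out/minikube-linux-arm64 delete -p functional-752103
    out/minikube-linux-arm64 start -p functional-752103 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 --extra-config=kubelet.cgroup-driver=systemd

The cgroups v1 deprecation warning in the same output also points at the kubelet FailCgroupV1 option; whether that option needs to be set for kubelet v1.35 on this cgroup v1 host is not confirmed by this log.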
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
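The snapshot above shows all proxy variables empty on the host, while the start output earlier warned that NO_PROXY does not include the minikube IP (192.168.49.2). If a proxy were genuinely in use, a hedged way to avoid that warning (addresses copied from the log; not applied in this run) would be:

    # include the minikube node IP and subnet before starting
    export NO_PROXY=localhost,127.0.0.1,192.168.49.2,192.168.49.0/24
    out/minikube-linux-arm64 start -p functional-752103 --driver=docker --container-runtime=crio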
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-752103
helpers_test.go:244: (dbg) docker inspect functional-752103:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b",
	        "Created": "2025-12-13T18:27:36.869398923Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 33347,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T18:27:36.933863328Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b/hostname",
	        "HostsPath": "/var/lib/docker/containers/d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b/hosts",
	        "LogPath": "/var/lib/docker/containers/d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b/d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b-json.log",
	        "Name": "/functional-752103",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-752103:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-752103",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b",
	                "LowerDir": "/var/lib/docker/overlay2/4f08fa508e4fa526830ae73227295c5d2e0629c99efbda84852b3df25bd9e170-init/diff:/var/lib/docker/overlay2/4cda671c3c20fb572bbb254b6cb2d66de67b46788c2aa883ec19024f1ff16f23/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4f08fa508e4fa526830ae73227295c5d2e0629c99efbda84852b3df25bd9e170/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4f08fa508e4fa526830ae73227295c5d2e0629c99efbda84852b3df25bd9e170/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4f08fa508e4fa526830ae73227295c5d2e0629c99efbda84852b3df25bd9e170/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-752103",
	                "Source": "/var/lib/docker/volumes/functional-752103/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-752103",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-752103",
	                "name.minikube.sigs.k8s.io": "functional-752103",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "625ea12887c8956887678f2408d6edd5b98f62bce458a6906f4f662a3001a53b",
	            "SandboxKey": "/var/run/docker/netns/625ea12887c8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-752103": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7e:2c:83:4a:30:9a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "84df48e9f7dac8c6a1b67709e5eea216d99d3f16eb50b96c7f0e4a82b3193d56",
	                    "EndpointID": "e69b1f9610d40396647a2d78f0170c31b9cd8e641fc8465e742649cccee8e591",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-752103",
	                        "d72b547cdcc2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
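The inspect output above shows the apiserver container port 8441 published on 127.0.0.1:32786. A quick, hedged way to re-check that mapping without the full dump (the host port is simply the one captured above and can change between runs):

    docker port functional-752103 8441
    # expected to fail in this run, since the control plane never came up
    curl -k https://127.0.0.1:32786/healthz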
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-752103 -n functional-752103
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-752103 -n functional-752103: exit status 6 (313.708485ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 18:35:52.804104   38538 status.go:458] kubeconfig endpoint: get endpoint: "functional-752103" does not appear in /home/jenkins/minikube-integration/22122-2686/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
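The status output above flags a stale kubeconfig entry: "functional-752103" does not appear in the kubeconfig the test points at. The follow-up the warning itself suggests, sketched here with this profile name (it only repairs the kubectl context and would not address the kubelet failure):

    out/minikube-linux-arm64 update-context -p functional-752103
    kubectl config current-context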
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-350101 ssh sudo cat /usr/share/ca-certificates/4637.pem                                                                                        │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ ssh            │ functional-350101 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                  │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ image          │ functional-350101 image load --daemon kicbase/echo-server:functional-350101 --alsologtostderr                                                             │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ ssh            │ functional-350101 ssh sudo cat /etc/ssl/certs/46372.pem                                                                                                   │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ ssh            │ functional-350101 ssh sudo cat /usr/share/ca-certificates/46372.pem                                                                                       │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ image          │ functional-350101 image ls                                                                                                                                │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ ssh            │ functional-350101 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ image          │ functional-350101 image save kicbase/echo-server:functional-350101 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ image          │ functional-350101 image rm kicbase/echo-server:functional-350101 --alsologtostderr                                                                        │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ image          │ functional-350101 image ls                                                                                                                                │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ image          │ functional-350101 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ update-context │ functional-350101 update-context --alsologtostderr -v=2                                                                                                   │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ image          │ functional-350101 image ls                                                                                                                                │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ update-context │ functional-350101 update-context --alsologtostderr -v=2                                                                                                   │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ update-context │ functional-350101 update-context --alsologtostderr -v=2                                                                                                   │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ image          │ functional-350101 image save --daemon kicbase/echo-server:functional-350101 --alsologtostderr                                                             │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ image          │ functional-350101 image ls --format yaml --alsologtostderr                                                                                                │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ image          │ functional-350101 image ls --format short --alsologtostderr                                                                                               │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ ssh            │ functional-350101 ssh pgrep buildkitd                                                                                                                     │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │                     │
	│ image          │ functional-350101 image ls --format json --alsologtostderr                                                                                                │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ image          │ functional-350101 image ls --format table --alsologtostderr                                                                                               │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ image          │ functional-350101 image build -t localhost/my-image:functional-350101 testdata/build --alsologtostderr                                                    │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ image          │ functional-350101 image ls                                                                                                                                │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ delete         │ -p functional-350101                                                                                                                                      │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ start          │ -p functional-752103 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0         │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 18:27:31
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 18:27:31.841152   32944 out.go:360] Setting OutFile to fd 1 ...
	I1213 18:27:31.841276   32944 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:27:31.841280   32944 out.go:374] Setting ErrFile to fd 2...
	I1213 18:27:31.841284   32944 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:27:31.841565   32944 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-2686/.minikube/bin
	I1213 18:27:31.841952   32944 out.go:368] Setting JSON to false
	I1213 18:27:31.842814   32944 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4204,"bootTime":1765646248,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 18:27:31.842870   32944 start.go:143] virtualization:  
	I1213 18:27:31.847177   32944 out.go:179] * [functional-752103] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 18:27:31.851645   32944 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 18:27:31.851818   32944 notify.go:221] Checking for updates...
	I1213 18:27:31.858280   32944 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 18:27:31.861521   32944 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-2686/kubeconfig
	I1213 18:27:31.864624   32944 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-2686/.minikube
	I1213 18:27:31.867915   32944 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 18:27:31.871100   32944 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 18:27:31.874238   32944 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 18:27:31.895598   32944 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 18:27:31.895699   32944 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 18:27:31.957857   32944 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-13 18:27:31.948437692 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 18:27:31.957945   32944 docker.go:319] overlay module found
	I1213 18:27:31.961222   32944 out.go:179] * Using the docker driver based on user configuration
	I1213 18:27:31.964197   32944 start.go:309] selected driver: docker
	I1213 18:27:31.964205   32944 start.go:927] validating driver "docker" against <nil>
	I1213 18:27:31.964217   32944 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 18:27:31.964932   32944 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 18:27:32.024762   32944 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-13 18:27:32.014766493 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 18:27:32.024942   32944 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 18:27:32.025178   32944 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 18:27:32.028279   32944 out.go:179] * Using Docker driver with root privileges
	I1213 18:27:32.031309   32944 cni.go:84] Creating CNI manager for ""
	I1213 18:27:32.031366   32944 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 18:27:32.031373   32944 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 18:27:32.031463   32944 start.go:353] cluster config:
	{Name:functional-752103 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-752103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 18:27:32.036806   32944 out.go:179] * Starting "functional-752103" primary control-plane node in "functional-752103" cluster
	I1213 18:27:32.039805   32944 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 18:27:32.042956   32944 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 18:27:32.045919   32944 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 18:27:32.045959   32944 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-2686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1213 18:27:32.045968   32944 cache.go:65] Caching tarball of preloaded images
	I1213 18:27:32.046005   32944 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 18:27:32.046053   32944 preload.go:238] Found /home/jenkins/minikube-integration/22122-2686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 18:27:32.046062   32944 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1213 18:27:32.046457   32944 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/config.json ...
	I1213 18:27:32.046476   32944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/config.json: {Name:mk4fa1585aef59c067bbb1ec7f65d098fc9e2c0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 18:27:32.064870   32944 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 18:27:32.064880   32944 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 18:27:32.064902   32944 cache.go:243] Successfully downloaded all kic artifacts
	I1213 18:27:32.064931   32944 start.go:360] acquireMachinesLock for functional-752103: {Name:mkf4ec1d9e1836ef54983db4562aedfd1a9c51c3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 18:27:32.065066   32944 start.go:364] duration metric: took 121.361µs to acquireMachinesLock for "functional-752103"
	I1213 18:27:32.065112   32944 start.go:93] Provisioning new machine with config: &{Name:functional-752103 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-752103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 18:27:32.065175   32944 start.go:125] createHost starting for "" (driver="docker")
	I1213 18:27:32.070281   32944 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	W1213 18:27:32.070558   32944 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:42681 to docker env.
	I1213 18:27:32.070584   32944 start.go:159] libmachine.API.Create for "functional-752103" (driver="docker")
	I1213 18:27:32.070606   32944 client.go:173] LocalClient.Create starting
	I1213 18:27:32.070695   32944 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem
	I1213 18:27:32.070730   32944 main.go:143] libmachine: Decoding PEM data...
	I1213 18:27:32.070744   32944 main.go:143] libmachine: Parsing certificate...
	I1213 18:27:32.070794   32944 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem
	I1213 18:27:32.070810   32944 main.go:143] libmachine: Decoding PEM data...
	I1213 18:27:32.070822   32944 main.go:143] libmachine: Parsing certificate...
	I1213 18:27:32.071177   32944 cli_runner.go:164] Run: docker network inspect functional-752103 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 18:27:32.087413   32944 cli_runner.go:211] docker network inspect functional-752103 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 18:27:32.087487   32944 network_create.go:284] running [docker network inspect functional-752103] to gather additional debugging logs...
	I1213 18:27:32.087510   32944 cli_runner.go:164] Run: docker network inspect functional-752103
	W1213 18:27:32.103188   32944 cli_runner.go:211] docker network inspect functional-752103 returned with exit code 1
	I1213 18:27:32.103207   32944 network_create.go:287] error running [docker network inspect functional-752103]: docker network inspect functional-752103: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network functional-752103 not found
	I1213 18:27:32.103237   32944 network_create.go:289] output of [docker network inspect functional-752103]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network functional-752103 not found
	
	** /stderr **
	I1213 18:27:32.103329   32944 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 18:27:32.119643   32944 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001928460}
	I1213 18:27:32.119674   32944 network_create.go:124] attempt to create docker network functional-752103 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1213 18:27:32.119744   32944 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-752103 functional-752103
	I1213 18:27:32.173200   32944 network_create.go:108] docker network functional-752103 192.168.49.0/24 created
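	(The subnet and gateway picked above can be confirmed by hand. A minimal sketch, assuming the same profile name and a local docker CLI; the --format template mirrors the one minikube runs a few lines earlier:)
	# inspect the network minikube just created (its name matches the profile)
	docker network inspect functional-752103 \
	  --format '{{range .IPAM.Config}}{{.Subnet}} via {{.Gateway}}{{end}}'
	# expected, per the log above: 192.168.49.0/24 via 192.168.49.1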
	I1213 18:27:32.173220   32944 kic.go:121] calculated static IP "192.168.49.2" for the "functional-752103" container
	I1213 18:27:32.173308   32944 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 18:27:32.189321   32944 cli_runner.go:164] Run: docker volume create functional-752103 --label name.minikube.sigs.k8s.io=functional-752103 --label created_by.minikube.sigs.k8s.io=true
	I1213 18:27:32.207607   32944 oci.go:103] Successfully created a docker volume functional-752103
	I1213 18:27:32.207695   32944 cli_runner.go:164] Run: docker run --rm --name functional-752103-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-752103 --entrypoint /usr/bin/test -v functional-752103:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 18:27:32.722675   32944 oci.go:107] Successfully prepared a docker volume functional-752103
	I1213 18:27:32.722744   32944 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 18:27:32.722753   32944 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 18:27:32.722818   32944 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22122-2686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v functional-752103:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 18:27:36.792367   32944 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22122-2686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v functional-752103:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (4.069515433s)
	I1213 18:27:36.792388   32944 kic.go:203] duration metric: took 4.06963265s to extract preloaded images to volume ...
	W1213 18:27:36.792539   32944 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1213 18:27:36.792635   32944 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 18:27:36.855062   32944 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-752103 --name functional-752103 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-752103 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-752103 --network functional-752103 --ip 192.168.49.2 --volume functional-752103:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 18:27:37.165770   32944 cli_runner.go:164] Run: docker container inspect functional-752103 --format={{.State.Running}}
	I1213 18:27:37.186877   32944 cli_runner.go:164] Run: docker container inspect functional-752103 --format={{.State.Status}}
	I1213 18:27:37.211542   32944 cli_runner.go:164] Run: docker exec functional-752103 stat /var/lib/dpkg/alternatives/iptables
	I1213 18:27:37.262379   32944 oci.go:144] the created container "functional-752103" has a running status.
	I1213 18:27:37.262397   32944 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa...
	I1213 18:27:37.325412   32944 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 18:27:37.352782   32944 cli_runner.go:164] Run: docker container inspect functional-752103 --format={{.State.Status}}
	I1213 18:27:37.384945   32944 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 18:27:37.384957   32944 kic_runner.go:114] Args: [docker exec --privileged functional-752103 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 18:27:37.434798   32944 cli_runner.go:164] Run: docker container inspect functional-752103 --format={{.State.Status}}
	I1213 18:27:37.456092   32944 machine.go:94] provisionDockerMachine start ...
	I1213 18:27:37.456187   32944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:27:37.480350   32944 main.go:143] libmachine: Using SSH client type: native
	I1213 18:27:37.480665   32944 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1213 18:27:37.480675   32944 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 18:27:37.481354   32944 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57118->127.0.0.1:32783: read: connection reset by peer
	I1213 18:27:40.628701   32944 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-752103
	
	I1213 18:27:40.628715   32944 ubuntu.go:182] provisioning hostname "functional-752103"
	I1213 18:27:40.628786   32944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:27:40.646838   32944 main.go:143] libmachine: Using SSH client type: native
	I1213 18:27:40.647163   32944 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1213 18:27:40.647172   32944 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-752103 && echo "functional-752103" | sudo tee /etc/hostname
	I1213 18:27:40.808170   32944 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-752103
	
	I1213 18:27:40.808239   32944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:27:40.828554   32944 main.go:143] libmachine: Using SSH client type: native
	I1213 18:27:40.828899   32944 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1213 18:27:40.828913   32944 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-752103' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-752103/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-752103' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 18:27:40.977472   32944 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 18:27:40.977497   32944 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-2686/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-2686/.minikube}
	I1213 18:27:40.977531   32944 ubuntu.go:190] setting up certificates
	I1213 18:27:40.977542   32944 provision.go:84] configureAuth start
	I1213 18:27:40.977620   32944 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-752103
	I1213 18:27:40.995351   32944 provision.go:143] copyHostCerts
	I1213 18:27:40.995417   32944 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem, removing ...
	I1213 18:27:40.995424   32944 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem
	I1213 18:27:40.995502   32944 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem (1675 bytes)
	I1213 18:27:40.995601   32944 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem, removing ...
	I1213 18:27:40.995604   32944 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem
	I1213 18:27:40.995629   32944 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem (1082 bytes)
	I1213 18:27:40.995687   32944 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem, removing ...
	I1213 18:27:40.995690   32944 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem
	I1213 18:27:40.995712   32944 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem (1123 bytes)
	I1213 18:27:40.995764   32944 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem org=jenkins.functional-752103 san=[127.0.0.1 192.168.49.2 functional-752103 localhost minikube]
	I1213 18:27:41.464735   32944 provision.go:177] copyRemoteCerts
	I1213 18:27:41.464788   32944 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 18:27:41.464835   32944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:27:41.481676   32944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa Username:docker}
	I1213 18:27:41.584630   32944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 18:27:41.601893   32944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 18:27:41.619593   32944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 18:27:41.638446   32944 provision.go:87] duration metric: took 660.882505ms to configureAuth
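	(If the SANs on the docker-machine server certificate generated above ever need checking, a minimal sketch, assuming openssl is available on the test host and using the server.pem path from the provision step:)
	# list the SANs baked into the freshly generated server certificate
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'
	# should list 127.0.0.1, 192.168.49.2, functional-752103, localhost, minikube per the san=[...] line above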
	I1213 18:27:41.638464   32944 ubuntu.go:206] setting minikube options for container-runtime
	I1213 18:27:41.638650   32944 config.go:182] Loaded profile config "functional-752103": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 18:27:41.638758   32944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:27:41.655744   32944 main.go:143] libmachine: Using SSH client type: native
	I1213 18:27:41.656049   32944 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1213 18:27:41.656060   32944 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 18:27:41.955746   32944 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 18:27:41.955759   32944 machine.go:97] duration metric: took 4.499656565s to provisionDockerMachine
	I1213 18:27:41.955767   32944 client.go:176] duration metric: took 9.885156707s to LocalClient.Create
	I1213 18:27:41.955785   32944 start.go:167] duration metric: took 9.885201342s to libmachine.API.Create "functional-752103"
	I1213 18:27:41.955791   32944 start.go:293] postStartSetup for "functional-752103" (driver="docker")
	I1213 18:27:41.955802   32944 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 18:27:41.955876   32944 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 18:27:41.955915   32944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:27:41.972806   32944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa Username:docker}
	I1213 18:27:42.082798   32944 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 18:27:42.087337   32944 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 18:27:42.087356   32944 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 18:27:42.087368   32944 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-2686/.minikube/addons for local assets ...
	I1213 18:27:42.087440   32944 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-2686/.minikube/files for local assets ...
	I1213 18:27:42.087537   32944 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem -> 46372.pem in /etc/ssl/certs
	I1213 18:27:42.087619   32944 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/test/nested/copy/4637/hosts -> hosts in /etc/test/nested/copy/4637
	I1213 18:27:42.087664   32944 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4637
	I1213 18:27:42.097387   32944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem --> /etc/ssl/certs/46372.pem (1708 bytes)
	I1213 18:27:42.118710   32944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/test/nested/copy/4637/hosts --> /etc/test/nested/copy/4637/hosts (40 bytes)
	I1213 18:27:42.140947   32944 start.go:296] duration metric: took 185.141488ms for postStartSetup
	I1213 18:27:42.141385   32944 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-752103
	I1213 18:27:42.166251   32944 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/config.json ...
	I1213 18:27:42.166574   32944 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 18:27:42.166616   32944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:27:42.186173   32944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa Username:docker}
	I1213 18:27:42.290343   32944 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 18:27:42.295401   32944 start.go:128] duration metric: took 10.230208659s to createHost
	I1213 18:27:42.295417   32944 start.go:83] releasing machines lock for "functional-752103", held for 10.230343624s
	I1213 18:27:42.295491   32944 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-752103
	I1213 18:27:42.317366   32944 out.go:179] * Found network options:
	I1213 18:27:42.320414   32944 out.go:179]   - HTTP_PROXY=localhost:42681
	W1213 18:27:42.323403   32944 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	I1213 18:27:42.326420   32944 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	I1213 18:27:42.329455   32944 ssh_runner.go:195] Run: cat /version.json
	I1213 18:27:42.329487   32944 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 18:27:42.329496   32944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:27:42.329538   32944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:27:42.356548   32944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa Username:docker}
	I1213 18:27:42.357522   32944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa Username:docker}
	I1213 18:27:42.456666   32944 ssh_runner.go:195] Run: systemctl --version
	I1213 18:27:42.550143   32944 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 18:27:42.584733   32944 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 18:27:42.589099   32944 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 18:27:42.589170   32944 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 18:27:42.616636   32944 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1213 18:27:42.616649   32944 start.go:496] detecting cgroup driver to use...
	I1213 18:27:42.616681   32944 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 18:27:42.616729   32944 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 18:27:42.634644   32944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 18:27:42.648107   32944 docker.go:218] disabling cri-docker service (if available) ...
	I1213 18:27:42.648160   32944 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 18:27:42.666305   32944 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 18:27:42.684990   32944 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 18:27:42.809364   32944 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 18:27:42.934099   32944 docker.go:234] disabling docker service ...
	I1213 18:27:42.934152   32944 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 18:27:42.954931   32944 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 18:27:42.967649   32944 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 18:27:43.087719   32944 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 18:27:43.210770   32944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 18:27:43.224584   32944 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 18:27:43.238516   32944 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 18:27:43.238572   32944 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:27:43.247479   32944 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 18:27:43.247537   32944 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:27:43.257443   32944 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:27:43.266863   32944 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:27:43.275799   32944 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 18:27:43.284154   32944 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:27:43.293947   32944 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:27:43.307875   32944 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:27:43.317120   32944 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 18:27:43.325171   32944 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 18:27:43.332708   32944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 18:27:43.446325   32944 ssh_runner.go:195] Run: sudo systemctl restart crio
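	(The sed edits above all target /etc/crio/crio.conf.d/02-crio.conf. A minimal sketch of confirming they took effect inside the node container, assuming docker exec access to the profile container:)
	# show the keys minikube just rewrote in the cri-o drop-in config
	docker exec functional-752103 grep -E \
	  'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# expected, per the commands above: pause_image = "registry.k8s.io/pause:3.10.1",
	# cgroup_manager = "cgroupfs", conmon_cgroup = "pod", "net.ipv4.ip_unprivileged_port_start=0"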
	I1213 18:27:43.619417   32944 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 18:27:43.619486   32944 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 18:27:43.623666   32944 start.go:564] Will wait 60s for crictl version
	I1213 18:27:43.623729   32944 ssh_runner.go:195] Run: which crictl
	I1213 18:27:43.627150   32944 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 18:27:43.650592   32944 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 18:27:43.650681   32944 ssh_runner.go:195] Run: crio --version
	I1213 18:27:43.679276   32944 ssh_runner.go:195] Run: crio --version
	I1213 18:27:43.712325   32944 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1213 18:27:43.715181   32944 cli_runner.go:164] Run: docker network inspect functional-752103 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 18:27:43.731686   32944 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 18:27:43.735372   32944 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 18:27:43.744987   32944 kubeadm.go:884] updating cluster {Name:functional-752103 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-752103 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQem
uFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 18:27:43.745121   32944 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 18:27:43.745172   32944 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 18:27:43.789961   32944 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 18:27:43.789972   32944 crio.go:433] Images already preloaded, skipping extraction
	I1213 18:27:43.790026   32944 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 18:27:43.815606   32944 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 18:27:43.815617   32944 cache_images.go:86] Images are preloaded, skipping loading
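	(The preload check above just runs crictl inside the node. A minimal sketch of the same check by hand, assuming the container name from this run:)
	# list the images cri-o already has, filtered to the Kubernetes version under test
	docker exec functional-752103 sudo crictl images | grep v1.35.0-beta.0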
	I1213 18:27:43.815623   32944 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1213 18:27:43.815728   32944 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-752103 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-752103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 18:27:43.815806   32944 ssh_runner.go:195] Run: crio config
	I1213 18:27:43.884656   32944 cni.go:84] Creating CNI manager for ""
	I1213 18:27:43.884667   32944 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 18:27:43.884689   32944 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 18:27:43.884710   32944 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-752103 NodeName:functional-752103 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 18:27:43.884864   32944 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-752103"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 18:27:43.884942   32944 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 18:27:43.892752   32944 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 18:27:43.892816   32944 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 18:27:43.900712   32944 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1213 18:27:43.914253   32944 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 18:27:43.927865   32944 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
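	(Before kubeadm init runs further down, the generated config can be sanity-checked. A minimal sketch, assuming this kubeadm build ships the `config validate` subcommand (present in recent releases) and using the paths from the log:)
	# validate the kubeadm config minikube just wrote to the node
	docker exec functional-752103 sudo \
	  /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new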
	I1213 18:27:43.940920   32944 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 18:27:43.944586   32944 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 18:27:43.955957   32944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 18:27:44.080662   32944 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 18:27:44.101589   32944 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103 for IP: 192.168.49.2
	I1213 18:27:44.101613   32944 certs.go:195] generating shared ca certs ...
	I1213 18:27:44.101628   32944 certs.go:227] acquiring lock for ca certs: {Name:mkf9b87b9a1a82bdfcae8a54365751154f6d858f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 18:27:44.101809   32944 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key
	I1213 18:27:44.101861   32944 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key
	I1213 18:27:44.101867   32944 certs.go:257] generating profile certs ...
	I1213 18:27:44.101969   32944 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/client.key
	I1213 18:27:44.101989   32944 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/client.crt with IP's: []
	I1213 18:27:44.584834   32944 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/client.crt ...
	I1213 18:27:44.584851   32944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/client.crt: {Name:mk66c878815cb1e95fa7c677d3ba88654b299f73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 18:27:44.585065   32944 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/client.key ...
	I1213 18:27:44.585072   32944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/client.key: {Name:mkdce1942c5889ba355bf422452d41e03aaed36b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 18:27:44.585165   32944 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/apiserver.key.597c6026
	I1213 18:27:44.585176   32944 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/apiserver.crt.597c6026 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1213 18:27:45.177431   32944 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/apiserver.crt.597c6026 ...
	I1213 18:27:45.177449   32944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/apiserver.crt.597c6026: {Name:mk679bde13121266de360cc0ed5250e1cbe7774f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 18:27:45.177731   32944 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/apiserver.key.597c6026 ...
	I1213 18:27:45.177745   32944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/apiserver.key.597c6026: {Name:mk27c241259a7f91dcf006cc01d2348359d03e06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 18:27:45.178015   32944 certs.go:382] copying /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/apiserver.crt.597c6026 -> /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/apiserver.crt
	I1213 18:27:45.178107   32944 certs.go:386] copying /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/apiserver.key.597c6026 -> /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/apiserver.key
	I1213 18:27:45.178172   32944 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/proxy-client.key
	I1213 18:27:45.178188   32944 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/proxy-client.crt with IP's: []
	I1213 18:27:45.260026   32944 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/proxy-client.crt ...
	I1213 18:27:45.260042   32944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/proxy-client.crt: {Name:mk798da98ad772310fdc93a272f4d3dd0f4aae87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 18:27:45.260391   32944 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/proxy-client.key ...
	I1213 18:27:45.260404   32944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/proxy-client.key: {Name:mkcb181998664f380bf5e575aa0a0908bcd244a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 18:27:45.262452   32944 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637.pem (1338 bytes)
	W1213 18:27:45.262559   32944 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637_empty.pem, impossibly tiny 0 bytes
	I1213 18:27:45.262569   32944 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 18:27:45.262599   32944 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem (1082 bytes)
	I1213 18:27:45.262647   32944 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem (1123 bytes)
	I1213 18:27:45.262684   32944 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem (1675 bytes)
	I1213 18:27:45.262763   32944 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem (1708 bytes)
	I1213 18:27:45.264114   32944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 18:27:45.308489   32944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 18:27:45.348223   32944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 18:27:45.370656   32944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 18:27:45.389765   32944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 18:27:45.409454   32944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 18:27:45.427980   32944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 18:27:45.447048   32944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 18:27:45.465170   32944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem --> /usr/share/ca-certificates/46372.pem (1708 bytes)
	I1213 18:27:45.483534   32944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 18:27:45.501541   32944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637.pem --> /usr/share/ca-certificates/4637.pem (1338 bytes)
	I1213 18:27:45.519503   32944 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 18:27:45.532514   32944 ssh_runner.go:195] Run: openssl version
	I1213 18:27:45.538803   32944 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4637.pem
	I1213 18:27:45.546463   32944 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4637.pem /etc/ssl/certs/4637.pem
	I1213 18:27:45.554265   32944 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4637.pem
	I1213 18:27:45.558262   32944 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 18:27 /usr/share/ca-certificates/4637.pem
	I1213 18:27:45.558317   32944 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4637.pem
	I1213 18:27:45.603018   32944 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 18:27:45.611024   32944 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4637.pem /etc/ssl/certs/51391683.0
	I1213 18:27:45.618788   32944 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/46372.pem
	I1213 18:27:45.626946   32944 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/46372.pem /etc/ssl/certs/46372.pem
	I1213 18:27:45.634769   32944 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/46372.pem
	I1213 18:27:45.638741   32944 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 18:27 /usr/share/ca-certificates/46372.pem
	I1213 18:27:45.638800   32944 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/46372.pem
	I1213 18:27:45.681462   32944 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 18:27:45.689055   32944 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/46372.pem /etc/ssl/certs/3ec20f2e.0
	I1213 18:27:45.696508   32944 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 18:27:45.704189   32944 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 18:27:45.712110   32944 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 18:27:45.716074   32944 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I1213 18:27:45.716132   32944 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 18:27:45.757214   32944 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 18:27:45.764967   32944 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
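	(The openssl/ln pairs above implement the standard c_rehash-style trust layout: each CA file in /etc/ssl/certs gets a symlink named after its subject hash so OpenSSL can find it. A minimal sketch of the same pattern for one file, assuming a shell on the node:)
	# compute the subject hash and create the <hash>.0 symlink OpenSSL looks up
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
	# in this run the hash was b5213941, matching the symlink created above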
	I1213 18:27:45.772533   32944 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 18:27:45.775984   32944 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 18:27:45.776035   32944 kubeadm.go:401] StartCluster: {Name:functional-752103 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-752103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFi
rmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 18:27:45.776103   32944 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 18:27:45.776157   32944 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 18:27:45.802433   32944 cri.go:89] found id: ""
	I1213 18:27:45.802498   32944 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 18:27:45.810246   32944 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 18:27:45.818174   32944 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 18:27:45.818229   32944 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 18:27:45.826285   32944 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 18:27:45.826294   32944 kubeadm.go:158] found existing configuration files:
	
	I1213 18:27:45.826348   32944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 18:27:45.834245   32944 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 18:27:45.834301   32944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 18:27:45.841810   32944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 18:27:45.849589   32944 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 18:27:45.849646   32944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 18:27:45.857376   32944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 18:27:45.865199   32944 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 18:27:45.865265   32944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 18:27:45.872714   32944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 18:27:45.880428   32944 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 18:27:45.880493   32944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 18:27:45.888543   32944 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 18:27:45.925814   32944 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 18:27:45.925866   32944 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 18:27:46.006544   32944 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 18:27:46.011304   32944 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 18:27:46.011364   32944 kubeadm.go:319] OS: Linux
	I1213 18:27:46.011419   32944 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 18:27:46.011476   32944 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 18:27:46.011535   32944 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 18:27:46.011593   32944 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 18:27:46.011646   32944 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 18:27:46.011702   32944 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 18:27:46.011805   32944 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 18:27:46.011866   32944 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 18:27:46.011935   32944 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 18:27:46.077339   32944 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 18:27:46.077442   32944 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 18:27:46.077540   32944 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 18:27:46.089603   32944 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 18:27:46.096106   32944 out.go:252]   - Generating certificates and keys ...
	I1213 18:27:46.096223   32944 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 18:27:46.096297   32944 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 18:27:46.494139   32944 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 18:27:46.670859   32944 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 18:27:46.830782   32944 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 18:27:47.257202   32944 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 18:27:47.590828   32944 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 18:27:47.591124   32944 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [functional-752103 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1213 18:27:47.970898   32944 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 18:27:47.971194   32944 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [functional-752103 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1213 18:27:48.123019   32944 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 18:27:48.365755   32944 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 18:27:48.438093   32944 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 18:27:48.438393   32944 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 18:27:48.907793   32944 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 18:27:49.174683   32944 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 18:27:49.290193   32944 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 18:27:49.845115   32944 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 18:27:50.085180   32944 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 18:27:50.086231   32944 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 18:27:50.089527   32944 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 18:27:50.093103   32944 out.go:252]   - Booting up control plane ...
	I1213 18:27:50.093207   32944 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 18:27:50.093285   32944 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 18:27:50.094512   32944 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 18:27:50.119375   32944 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 18:27:50.119476   32944 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 18:27:50.127749   32944 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 18:27:50.128001   32944 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 18:27:50.128179   32944 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 18:27:50.272936   32944 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 18:27:50.273106   32944 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 18:31:50.273487   32944 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000291647s
	I1213 18:31:50.273512   32944 kubeadm.go:319] 
	I1213 18:31:50.273610   32944 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 18:31:50.273809   32944 kubeadm.go:319] 	- The kubelet is not running
	I1213 18:31:50.273988   32944 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 18:31:50.273997   32944 kubeadm.go:319] 
	I1213 18:31:50.274177   32944 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 18:31:50.274464   32944 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 18:31:50.274516   32944 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 18:31:50.274520   32944 kubeadm.go:319] 
	I1213 18:31:50.282208   32944 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 18:31:50.282670   32944 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 18:31:50.282788   32944 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 18:31:50.283024   32944 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 18:31:50.283029   32944 kubeadm.go:319] 
	I1213 18:31:50.283096   32944 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1213 18:31:50.283214   32944 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-752103 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-752103 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000291647s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1213 18:31:50.283306   32944 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 18:31:50.691674   32944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 18:31:50.704471   32944 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 18:31:50.704522   32944 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 18:31:50.712194   32944 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 18:31:50.712211   32944 kubeadm.go:158] found existing configuration files:
	
	I1213 18:31:50.712261   32944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 18:31:50.719944   32944 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 18:31:50.719998   32944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 18:31:50.727474   32944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 18:31:50.735341   32944 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 18:31:50.735397   32944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 18:31:50.742813   32944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 18:31:50.750853   32944 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 18:31:50.750913   32944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 18:31:50.758485   32944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 18:31:50.766140   32944 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 18:31:50.766196   32944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 18:31:50.773599   32944 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 18:31:50.812391   32944 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 18:31:50.812458   32944 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 18:31:50.892094   32944 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 18:31:50.892167   32944 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 18:31:50.892235   32944 kubeadm.go:319] OS: Linux
	I1213 18:31:50.892284   32944 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 18:31:50.892332   32944 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 18:31:50.892407   32944 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 18:31:50.892465   32944 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 18:31:50.892521   32944 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 18:31:50.892575   32944 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 18:31:50.892620   32944 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 18:31:50.892673   32944 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 18:31:50.892754   32944 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 18:31:50.960260   32944 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 18:31:50.960371   32944 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 18:31:50.960471   32944 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 18:31:50.973455   32944 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 18:31:50.978684   32944 out.go:252]   - Generating certificates and keys ...
	I1213 18:31:50.978796   32944 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 18:31:50.978867   32944 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 18:31:50.978948   32944 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 18:31:50.979012   32944 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 18:31:50.979090   32944 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 18:31:50.979146   32944 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 18:31:50.979212   32944 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 18:31:50.979277   32944 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 18:31:50.979355   32944 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 18:31:50.979431   32944 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 18:31:50.979479   32944 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 18:31:50.979539   32944 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 18:31:51.301194   32944 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 18:31:51.453900   32944 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 18:31:51.492383   32944 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 18:31:51.628605   32944 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 18:31:51.868009   32944 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 18:31:51.868580   32944 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 18:31:51.871249   32944 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 18:31:51.874402   32944 out.go:252]   - Booting up control plane ...
	I1213 18:31:51.874503   32944 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 18:31:51.874613   32944 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 18:31:51.874685   32944 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 18:31:51.888777   32944 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 18:31:51.888877   32944 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 18:31:51.900438   32944 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 18:31:51.900558   32944 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 18:31:51.900601   32944 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 18:31:52.033143   32944 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 18:31:52.033290   32944 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 18:35:52.032321   32944 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001290098s
	I1213 18:35:52.032350   32944 kubeadm.go:319] 
	I1213 18:35:52.032633   32944 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 18:35:52.032691   32944 kubeadm.go:319] 	- The kubelet is not running
	I1213 18:35:52.032890   32944 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 18:35:52.032896   32944 kubeadm.go:319] 
	I1213 18:35:52.033952   32944 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 18:35:52.034015   32944 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 18:35:52.034075   32944 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 18:35:52.034080   32944 kubeadm.go:319] 
	I1213 18:35:52.038391   32944 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 18:35:52.038865   32944 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 18:35:52.038975   32944 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 18:35:52.039221   32944 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 18:35:52.039225   32944 kubeadm.go:319] 
	I1213 18:35:52.039346   32944 kubeadm.go:403] duration metric: took 8m6.263314954s to StartCluster
	I1213 18:35:52.039378   32944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:35:52.039444   32944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:35:52.039531   32944 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 18:35:52.066919   32944 cri.go:89] found id: ""
	I1213 18:35:52.066934   32944 logs.go:282] 0 containers: []
	W1213 18:35:52.066941   32944 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:35:52.066946   32944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:35:52.067008   32944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:35:52.093470   32944 cri.go:89] found id: ""
	I1213 18:35:52.093483   32944 logs.go:282] 0 containers: []
	W1213 18:35:52.093490   32944 logs.go:284] No container was found matching "etcd"
	I1213 18:35:52.093495   32944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:35:52.093555   32944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:35:52.119469   32944 cri.go:89] found id: ""
	I1213 18:35:52.119483   32944 logs.go:282] 0 containers: []
	W1213 18:35:52.119490   32944 logs.go:284] No container was found matching "coredns"
	I1213 18:35:52.119495   32944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:35:52.119551   32944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:35:52.145665   32944 cri.go:89] found id: ""
	I1213 18:35:52.145679   32944 logs.go:282] 0 containers: []
	W1213 18:35:52.145697   32944 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:35:52.145702   32944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:35:52.145758   32944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:35:52.171197   32944 cri.go:89] found id: ""
	I1213 18:35:52.171210   32944 logs.go:282] 0 containers: []
	W1213 18:35:52.171218   32944 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:35:52.171223   32944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:35:52.171293   32944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:35:52.196429   32944 cri.go:89] found id: ""
	I1213 18:35:52.196443   32944 logs.go:282] 0 containers: []
	W1213 18:35:52.196451   32944 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:35:52.196456   32944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:35:52.196512   32944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:35:52.223659   32944 cri.go:89] found id: ""
	I1213 18:35:52.223681   32944 logs.go:282] 0 containers: []
	W1213 18:35:52.223688   32944 logs.go:284] No container was found matching "kindnet"
	I1213 18:35:52.223696   32944 logs.go:123] Gathering logs for kubelet ...
	I1213 18:35:52.223706   32944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:35:52.292827   32944 logs.go:123] Gathering logs for dmesg ...
	I1213 18:35:52.292845   32944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:35:52.303502   32944 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:35:52.303518   32944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:35:52.396055   32944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:35:52.387194    4868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:35:52.388527    4868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:35:52.388990    4868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:35:52.390586    4868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:35:52.391049    4868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:35:52.387194    4868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:35:52.388527    4868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:35:52.388990    4868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:35:52.390586    4868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:35:52.391049    4868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:35:52.396065   32944 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:35:52.396075   32944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:35:52.428123   32944 logs.go:123] Gathering logs for container status ...
	I1213 18:35:52.428141   32944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 18:35:52.456511   32944 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001290098s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 18:35:52.456560   32944 out.go:285] * 
	W1213 18:35:52.456619   32944 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001290098s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 18:35:52.456634   32944 out.go:285] * 
	W1213 18:35:52.458758   32944 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 18:35:52.463574   32944 out.go:203] 
	W1213 18:35:52.466498   32944 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001290098s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 18:35:52.466549   32944 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 18:35:52.466569   32944 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 18:35:52.469733   32944 out.go:203] 
	
	
	==> CRI-O <==
	Dec 13 18:27:43 functional-752103 crio[842]: time="2025-12-13T18:27:43.613911775Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 13 18:27:43 functional-752103 crio[842]: time="2025-12-13T18:27:43.613944841Z" level=info msg="Starting seccomp notifier watcher"
	Dec 13 18:27:43 functional-752103 crio[842]: time="2025-12-13T18:27:43.613981616Z" level=info msg="Create NRI interface"
	Dec 13 18:27:43 functional-752103 crio[842]: time="2025-12-13T18:27:43.614070994Z" level=info msg="built-in NRI default validator is disabled"
	Dec 13 18:27:43 functional-752103 crio[842]: time="2025-12-13T18:27:43.61407837Z" level=info msg="runtime interface created"
	Dec 13 18:27:43 functional-752103 crio[842]: time="2025-12-13T18:27:43.614088651Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 13 18:27:43 functional-752103 crio[842]: time="2025-12-13T18:27:43.614095354Z" level=info msg="runtime interface starting up..."
	Dec 13 18:27:43 functional-752103 crio[842]: time="2025-12-13T18:27:43.614101123Z" level=info msg="starting plugins..."
	Dec 13 18:27:43 functional-752103 crio[842]: time="2025-12-13T18:27:43.614112396Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 18:27:43 functional-752103 crio[842]: time="2025-12-13T18:27:43.61417697Z" level=info msg="No systemd watchdog enabled"
	Dec 13 18:27:43 functional-752103 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 13 18:27:46 functional-752103 crio[842]: time="2025-12-13T18:27:46.080833529Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=a1935052-e79a-4a5c-bba6-afa4d69263ef name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:27:46 functional-752103 crio[842]: time="2025-12-13T18:27:46.081866216Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=3af5da33-52f0-4f21-96de-a8fa438453f7 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:27:46 functional-752103 crio[842]: time="2025-12-13T18:27:46.082505849Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=34ce55de-fb25-4a45-9a9a-2b812f92d70e name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:27:46 functional-752103 crio[842]: time="2025-12-13T18:27:46.083063923Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=7c1b5b64-4d53-46be-aff6-fb4066abb3b6 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:27:46 functional-752103 crio[842]: time="2025-12-13T18:27:46.08363785Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=11ed5b24-a43f-41ff-8c53-d3e0721de1fd name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:27:46 functional-752103 crio[842]: time="2025-12-13T18:27:46.084178858Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=7847fa57-d001-4bac-ab73-82165c47221f name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:27:46 functional-752103 crio[842]: time="2025-12-13T18:27:46.08473562Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=3e620be4-862b-49f5-bce5-42bdeb1dc75d name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:31:50 functional-752103 crio[842]: time="2025-12-13T18:31:50.963705234Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=08e8e9c2-9c2f-480a-8c3f-bda0b6dd99a6 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:31:50 functional-752103 crio[842]: time="2025-12-13T18:31:50.964564777Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=be575f99-3657-4837-bc27-c4a04955a20a name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:31:50 functional-752103 crio[842]: time="2025-12-13T18:31:50.965258395Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=89b83f94-205e-499f-86e7-0fe8681dbc02 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:31:50 functional-752103 crio[842]: time="2025-12-13T18:31:50.965789266Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=ccd0bc2c-7090-4d1a-bb72-ebf701b8acd5 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:31:50 functional-752103 crio[842]: time="2025-12-13T18:31:50.966246503Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=56f23738-97f6-4dfd-bcb7-926cafd27e2e name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:31:50 functional-752103 crio[842]: time="2025-12-13T18:31:50.96665954Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=d20dab77-b398-45d6-8024-a02485f916bf name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:31:50 functional-752103 crio[842]: time="2025-12-13T18:31:50.96708093Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=cf516346-ad81-4692-9cd7-06fdb1c5e63a name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:35:53.439452    4995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:35:53.440149    4995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:35:53.441867    4995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:35:53.443088    4995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:35:53.445631    4995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 17:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014739] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.517365] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033368] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.774100] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.795951] kauditd_printk_skb: 39 callbacks suppressed
	[Dec13 18:17] overlayfs: idmapped layers are currently not supported
	[  +0.067652] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec13 18:23] overlayfs: idmapped layers are currently not supported
	[Dec13 18:24] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 18:35:53 up  1:18,  0 user,  load average: 0.23, 0.37, 0.52
	Linux functional-752103 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 18:35:50 functional-752103 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 18:35:51 functional-752103 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 638.
	Dec 13 18:35:51 functional-752103 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:35:51 functional-752103 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:35:51 functional-752103 kubelet[4798]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:35:51 functional-752103 kubelet[4798]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:35:51 functional-752103 kubelet[4798]: E1213 18:35:51.623258    4798 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 18:35:51 functional-752103 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 18:35:51 functional-752103 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 18:35:52 functional-752103 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 639.
	Dec 13 18:35:52 functional-752103 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:35:52 functional-752103 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:35:52 functional-752103 kubelet[4872]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:35:52 functional-752103 kubelet[4872]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:35:52 functional-752103 kubelet[4872]: E1213 18:35:52.380456    4872 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 18:35:52 functional-752103 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 18:35:52 functional-752103 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 18:35:53 functional-752103 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 640.
	Dec 13 18:35:53 functional-752103 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:35:53 functional-752103 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:35:53 functional-752103 kubelet[4914]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:35:53 functional-752103 kubelet[4914]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:35:53 functional-752103 kubelet[4914]: E1213 18:35:53.135732    4914 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 18:35:53 functional-752103 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 18:35:53 functional-752103 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-752103 -n functional-752103
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-752103 -n functional-752103: exit status 6 (374.565719ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1213 18:35:53.939610   38753 status.go:458] kubeconfig endpoint: get endpoint: "functional-752103" does not appear in /home/jenkins/minikube-integration/22122-2686/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "functional-752103" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (502.16s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (368.51s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1213 18:35:53.956233    4637 config.go:182] Loaded profile config "functional-752103": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-752103 --alsologtostderr -v=8
E1213 18:36:42.459152    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-350101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 18:37:10.169566    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-350101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 18:39:44.920652    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 18:41:08.001476    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 18:41:42.459942    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-350101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-752103 --alsologtostderr -v=8: exit status 80 (6m5.144764404s)

-- stdout --
	* [functional-752103] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22122
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22122-2686/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-2686/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-752103" primary control-plane node in "functional-752103" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1213 18:35:53.999245   38829 out.go:360] Setting OutFile to fd 1 ...
	I1213 18:35:53.999434   38829 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:35:53.999464   38829 out.go:374] Setting ErrFile to fd 2...
	I1213 18:35:53.999486   38829 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:35:53.999778   38829 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-2686/.minikube/bin
	I1213 18:35:54.000250   38829 out.go:368] Setting JSON to false
	I1213 18:35:54.001308   38829 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4706,"bootTime":1765646248,"procs":163,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 18:35:54.001457   38829 start.go:143] virtualization:  
	I1213 18:35:54.010388   38829 out.go:179] * [functional-752103] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 18:35:54.014157   38829 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 18:35:54.014353   38829 notify.go:221] Checking for updates...
	I1213 18:35:54.020075   38829 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 18:35:54.023186   38829 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-2686/kubeconfig
	I1213 18:35:54.026171   38829 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-2686/.minikube
	I1213 18:35:54.029213   38829 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 18:35:54.032235   38829 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 18:35:54.035744   38829 config.go:182] Loaded profile config "functional-752103": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 18:35:54.035909   38829 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 18:35:54.059624   38829 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 18:35:54.059744   38829 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 18:35:54.127464   38829 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 18:35:54.118134446 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 18:35:54.127571   38829 docker.go:319] overlay module found
	I1213 18:35:54.130605   38829 out.go:179] * Using the docker driver based on existing profile
	I1213 18:35:54.133521   38829 start.go:309] selected driver: docker
	I1213 18:35:54.133548   38829 start.go:927] validating driver "docker" against &{Name:functional-752103 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-752103 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 18:35:54.133668   38829 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 18:35:54.133779   38829 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 18:35:54.194306   38829 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 18:35:54.184244205 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 18:35:54.194716   38829 cni.go:84] Creating CNI manager for ""
	I1213 18:35:54.194772   38829 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 18:35:54.194827   38829 start.go:353] cluster config:
	{Name:functional-752103 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-752103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 18:35:54.197953   38829 out.go:179] * Starting "functional-752103" primary control-plane node in "functional-752103" cluster
	I1213 18:35:54.200965   38829 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 18:35:54.203964   38829 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 18:35:54.207111   38829 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 18:35:54.207169   38829 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-2686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1213 18:35:54.207189   38829 cache.go:65] Caching tarball of preloaded images
	I1213 18:35:54.207200   38829 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 18:35:54.207268   38829 preload.go:238] Found /home/jenkins/minikube-integration/22122-2686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 18:35:54.207278   38829 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1213 18:35:54.207380   38829 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/config.json ...
	I1213 18:35:54.226684   38829 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 18:35:54.226707   38829 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 18:35:54.226736   38829 cache.go:243] Successfully downloaded all kic artifacts
	I1213 18:35:54.226765   38829 start.go:360] acquireMachinesLock for functional-752103: {Name:mkf4ec1d9e1836ef54983db4562aedfd1a9c51c3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 18:35:54.226834   38829 start.go:364] duration metric: took 45.136µs to acquireMachinesLock for "functional-752103"
	I1213 18:35:54.226856   38829 start.go:96] Skipping create...Using existing machine configuration
	I1213 18:35:54.226865   38829 fix.go:54] fixHost starting: 
	I1213 18:35:54.227126   38829 cli_runner.go:164] Run: docker container inspect functional-752103 --format={{.State.Status}}
	I1213 18:35:54.245088   38829 fix.go:112] recreateIfNeeded on functional-752103: state=Running err=<nil>
	W1213 18:35:54.245125   38829 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 18:35:54.248193   38829 out.go:252] * Updating the running docker "functional-752103" container ...
	I1213 18:35:54.248225   38829 machine.go:94] provisionDockerMachine start ...
	I1213 18:35:54.248302   38829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:35:54.265418   38829 main.go:143] libmachine: Using SSH client type: native
	I1213 18:35:54.265750   38829 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1213 18:35:54.265765   38829 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 18:35:54.412628   38829 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-752103
	
	I1213 18:35:54.412654   38829 ubuntu.go:182] provisioning hostname "functional-752103"
	I1213 18:35:54.412716   38829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:35:54.431532   38829 main.go:143] libmachine: Using SSH client type: native
	I1213 18:35:54.431834   38829 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1213 18:35:54.431851   38829 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-752103 && echo "functional-752103" | sudo tee /etc/hostname
	I1213 18:35:54.592050   38829 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-752103
	
	I1213 18:35:54.592214   38829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:35:54.614592   38829 main.go:143] libmachine: Using SSH client type: native
	I1213 18:35:54.614908   38829 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1213 18:35:54.614930   38829 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-752103' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-752103/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-752103' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 18:35:54.769516   38829 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 18:35:54.769546   38829 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-2686/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-2686/.minikube}
	I1213 18:35:54.769572   38829 ubuntu.go:190] setting up certificates
	I1213 18:35:54.769581   38829 provision.go:84] configureAuth start
	I1213 18:35:54.769640   38829 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-752103
	I1213 18:35:54.787462   38829 provision.go:143] copyHostCerts
	I1213 18:35:54.787509   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem
	I1213 18:35:54.787551   38829 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem, removing ...
	I1213 18:35:54.787563   38829 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem
	I1213 18:35:54.787650   38829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem (1675 bytes)
	I1213 18:35:54.787740   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem
	I1213 18:35:54.787760   38829 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem, removing ...
	I1213 18:35:54.787765   38829 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem
	I1213 18:35:54.787800   38829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem (1082 bytes)
	I1213 18:35:54.787845   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem
	I1213 18:35:54.787868   38829 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem, removing ...
	I1213 18:35:54.787877   38829 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem
	I1213 18:35:54.787902   38829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem (1123 bytes)
	I1213 18:35:54.787955   38829 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem org=jenkins.functional-752103 san=[127.0.0.1 192.168.49.2 functional-752103 localhost minikube]
	I1213 18:35:54.878725   38829 provision.go:177] copyRemoteCerts
	I1213 18:35:54.878794   38829 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 18:35:54.878839   38829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:35:54.895961   38829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa Username:docker}
	I1213 18:35:55.009601   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1213 18:35:55.009696   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 18:35:55.033852   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1213 18:35:55.033923   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 18:35:55.052749   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1213 18:35:55.052813   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 18:35:55.072069   38829 provision.go:87] duration metric: took 302.464055ms to configureAuth
	I1213 18:35:55.072107   38829 ubuntu.go:206] setting minikube options for container-runtime
	I1213 18:35:55.072313   38829 config.go:182] Loaded profile config "functional-752103": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 18:35:55.072426   38829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:35:55.092406   38829 main.go:143] libmachine: Using SSH client type: native
	I1213 18:35:55.092745   38829 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1213 18:35:55.092771   38829 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 18:35:55.413226   38829 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 18:35:55.413251   38829 machine.go:97] duration metric: took 1.16501875s to provisionDockerMachine
	I1213 18:35:55.413264   38829 start.go:293] postStartSetup for "functional-752103" (driver="docker")
	I1213 18:35:55.413300   38829 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 18:35:55.413403   38829 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 18:35:55.413470   38829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:35:55.430709   38829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa Username:docker}
	I1213 18:35:55.537093   38829 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 18:35:55.540324   38829 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1213 18:35:55.540345   38829 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1213 18:35:55.540349   38829 command_runner.go:130] > VERSION_ID="12"
	I1213 18:35:55.540354   38829 command_runner.go:130] > VERSION="12 (bookworm)"
	I1213 18:35:55.540359   38829 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1213 18:35:55.540363   38829 command_runner.go:130] > ID=debian
	I1213 18:35:55.540368   38829 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1213 18:35:55.540373   38829 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1213 18:35:55.540379   38829 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1213 18:35:55.540743   38829 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 18:35:55.540767   38829 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 18:35:55.540779   38829 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-2686/.minikube/addons for local assets ...
	I1213 18:35:55.540839   38829 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-2686/.minikube/files for local assets ...
	I1213 18:35:55.540926   38829 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem -> 46372.pem in /etc/ssl/certs
	I1213 18:35:55.540938   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem -> /etc/ssl/certs/46372.pem
	I1213 18:35:55.541035   38829 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/test/nested/copy/4637/hosts -> hosts in /etc/test/nested/copy/4637
	I1213 18:35:55.541044   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/test/nested/copy/4637/hosts -> /etc/test/nested/copy/4637/hosts
	I1213 18:35:55.541087   38829 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4637
	I1213 18:35:55.548955   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem --> /etc/ssl/certs/46372.pem (1708 bytes)
	I1213 18:35:55.566460   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/test/nested/copy/4637/hosts --> /etc/test/nested/copy/4637/hosts (40 bytes)
	I1213 18:35:55.584163   38829 start.go:296] duration metric: took 170.869499ms for postStartSetup
	I1213 18:35:55.584240   38829 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 18:35:55.584294   38829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:35:55.601966   38829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa Username:docker}
	I1213 18:35:55.706486   38829 command_runner.go:130] > 11%
	I1213 18:35:55.706569   38829 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 18:35:55.711597   38829 command_runner.go:130] > 174G
	I1213 18:35:55.711643   38829 fix.go:56] duration metric: took 1.484775946s for fixHost
	I1213 18:35:55.711654   38829 start.go:83] releasing machines lock for "functional-752103", held for 1.484809349s
	I1213 18:35:55.711733   38829 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-752103
	I1213 18:35:55.731505   38829 ssh_runner.go:195] Run: cat /version.json
	I1213 18:35:55.731524   38829 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 18:35:55.731557   38829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:35:55.731578   38829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:35:55.752781   38829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa Username:docker}
	I1213 18:35:55.757282   38829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa Username:docker}
	I1213 18:35:55.945606   38829 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1213 18:35:55.945674   38829 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "9f3959633d311997d75aab86f8ff840f224c6486"}
	I1213 18:35:55.945816   38829 ssh_runner.go:195] Run: systemctl --version
	I1213 18:35:55.951961   38829 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1213 18:35:55.951999   38829 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1213 18:35:55.952322   38829 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 18:35:55.992229   38829 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1213 18:35:56.001527   38829 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1213 18:35:56.001762   38829 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 18:35:56.001849   38829 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 18:35:56.014010   38829 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 18:35:56.014037   38829 start.go:496] detecting cgroup driver to use...
	I1213 18:35:56.014094   38829 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 18:35:56.014182   38829 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 18:35:56.030879   38829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 18:35:56.046797   38829 docker.go:218] disabling cri-docker service (if available) ...
	I1213 18:35:56.046882   38829 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 18:35:56.067384   38829 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 18:35:56.080815   38829 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 18:35:56.192099   38829 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 18:35:56.317541   38829 docker.go:234] disabling docker service ...
	I1213 18:35:56.317693   38829 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 18:35:56.332696   38829 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 18:35:56.345912   38829 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 18:35:56.463560   38829 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 18:35:56.579100   38829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 18:35:56.592582   38829 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 18:35:56.605285   38829 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1213 18:35:56.606432   38829 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 18:35:56.606495   38829 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:35:56.615251   38829 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 18:35:56.615329   38829 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:35:56.624699   38829 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:35:56.633587   38829 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:35:56.642744   38829 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 18:35:56.651128   38829 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:35:56.660108   38829 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:35:56.669661   38829 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:35:56.678839   38829 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 18:35:56.685773   38829 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1213 18:35:56.686744   38829 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 18:35:56.694432   38829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 18:35:56.830483   38829 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 18:35:57.005048   38829 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 18:35:57.005450   38829 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 18:35:57.010285   38829 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1213 18:35:57.010309   38829 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1213 18:35:57.010316   38829 command_runner.go:130] > Device: 0,72	Inode: 1640        Links: 1
	I1213 18:35:57.010333   38829 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 18:35:57.010338   38829 command_runner.go:130] > Access: 2025-12-13 18:35:56.944672058 +0000
	I1213 18:35:57.010348   38829 command_runner.go:130] > Modify: 2025-12-13 18:35:56.944672058 +0000
	I1213 18:35:57.010355   38829 command_runner.go:130] > Change: 2025-12-13 18:35:56.944672058 +0000
	I1213 18:35:57.010364   38829 command_runner.go:130] >  Birth: -
	I1213 18:35:57.010406   38829 start.go:564] Will wait 60s for crictl version
	I1213 18:35:57.010459   38829 ssh_runner.go:195] Run: which crictl
	I1213 18:35:57.014231   38829 command_runner.go:130] > /usr/local/bin/crictl
	I1213 18:35:57.014339   38829 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 18:35:57.039763   38829 command_runner.go:130] > Version:  0.1.0
	I1213 18:35:57.039785   38829 command_runner.go:130] > RuntimeName:  cri-o
	I1213 18:35:57.039789   38829 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1213 18:35:57.039795   38829 command_runner.go:130] > RuntimeApiVersion:  v1
	I1213 18:35:57.039807   38829 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 18:35:57.039886   38829 ssh_runner.go:195] Run: crio --version
	I1213 18:35:57.067200   38829 command_runner.go:130] > crio version 1.34.3
	I1213 18:35:57.067289   38829 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1213 18:35:57.067311   38829 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1213 18:35:57.067352   38829 command_runner.go:130] >    GitTreeState:   dirty
	I1213 18:35:57.067376   38829 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1213 18:35:57.067397   38829 command_runner.go:130] >    GoVersion:      go1.24.6
	I1213 18:35:57.067430   38829 command_runner.go:130] >    Compiler:       gc
	I1213 18:35:57.067455   38829 command_runner.go:130] >    Platform:       linux/arm64
	I1213 18:35:57.067476   38829 command_runner.go:130] >    Linkmode:       static
	I1213 18:35:57.067513   38829 command_runner.go:130] >    BuildTags:
	I1213 18:35:57.067537   38829 command_runner.go:130] >      static
	I1213 18:35:57.067557   38829 command_runner.go:130] >      netgo
	I1213 18:35:57.067592   38829 command_runner.go:130] >      osusergo
	I1213 18:35:57.067614   38829 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1213 18:35:57.067632   38829 command_runner.go:130] >      seccomp
	I1213 18:35:57.067651   38829 command_runner.go:130] >      apparmor
	I1213 18:35:57.067685   38829 command_runner.go:130] >      selinux
	I1213 18:35:57.067706   38829 command_runner.go:130] >    LDFlags:          unknown
	I1213 18:35:57.067726   38829 command_runner.go:130] >    SeccompEnabled:   true
	I1213 18:35:57.067760   38829 command_runner.go:130] >    AppArmorEnabled:  false
	I1213 18:35:57.069374   38829 ssh_runner.go:195] Run: crio --version
	I1213 18:35:57.097856   38829 command_runner.go:130] > crio version 1.34.3
	I1213 18:35:57.097937   38829 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1213 18:35:57.097971   38829 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1213 18:35:57.098005   38829 command_runner.go:130] >    GitTreeState:   dirty
	I1213 18:35:57.098025   38829 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1213 18:35:57.098058   38829 command_runner.go:130] >    GoVersion:      go1.24.6
	I1213 18:35:57.098082   38829 command_runner.go:130] >    Compiler:       gc
	I1213 18:35:57.098103   38829 command_runner.go:130] >    Platform:       linux/arm64
	I1213 18:35:57.098156   38829 command_runner.go:130] >    Linkmode:       static
	I1213 18:35:57.098180   38829 command_runner.go:130] >    BuildTags:
	I1213 18:35:57.098200   38829 command_runner.go:130] >      static
	I1213 18:35:57.098234   38829 command_runner.go:130] >      netgo
	I1213 18:35:57.098253   38829 command_runner.go:130] >      osusergo
	I1213 18:35:57.098277   38829 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1213 18:35:57.098306   38829 command_runner.go:130] >      seccomp
	I1213 18:35:57.098328   38829 command_runner.go:130] >      apparmor
	I1213 18:35:57.098348   38829 command_runner.go:130] >      selinux
	I1213 18:35:57.098384   38829 command_runner.go:130] >    LDFlags:          unknown
	I1213 18:35:57.098407   38829 command_runner.go:130] >    SeccompEnabled:   true
	I1213 18:35:57.098425   38829 command_runner.go:130] >    AppArmorEnabled:  false
	I1213 18:35:57.103998   38829 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1213 18:35:57.106795   38829 cli_runner.go:164] Run: docker network inspect functional-752103 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 18:35:57.122531   38829 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 18:35:57.126557   38829 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1213 18:35:57.126659   38829 kubeadm.go:884] updating cluster {Name:functional-752103 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-752103 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQem
uFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 18:35:57.126789   38829 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 18:35:57.126855   38829 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 18:35:57.159258   38829 command_runner.go:130] > {
	I1213 18:35:57.159281   38829 command_runner.go:130] >   "images":  [
	I1213 18:35:57.159286   38829 command_runner.go:130] >     {
	I1213 18:35:57.159295   38829 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1213 18:35:57.159299   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.159305   38829 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1213 18:35:57.159309   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159312   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.159321   38829 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1213 18:35:57.159333   38829 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1213 18:35:57.159349   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159354   38829 command_runner.go:130] >       "size":  "111333938",
	I1213 18:35:57.159358   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.159370   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.159373   38829 command_runner.go:130] >     },
	I1213 18:35:57.159376   38829 command_runner.go:130] >     {
	I1213 18:35:57.159382   38829 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1213 18:35:57.159389   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.159394   38829 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1213 18:35:57.159398   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159402   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.159410   38829 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1213 18:35:57.159421   38829 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1213 18:35:57.159425   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159429   38829 command_runner.go:130] >       "size":  "29037500",
	I1213 18:35:57.159435   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.159443   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.159450   38829 command_runner.go:130] >     },
	I1213 18:35:57.159453   38829 command_runner.go:130] >     {
	I1213 18:35:57.159459   38829 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1213 18:35:57.159466   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.159471   38829 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1213 18:35:57.159474   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159481   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.159489   38829 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1213 18:35:57.159500   38829 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1213 18:35:57.159504   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159508   38829 command_runner.go:130] >       "size":  "74491780",
	I1213 18:35:57.159514   38829 command_runner.go:130] >       "username":  "nonroot",
	I1213 18:35:57.159519   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.159526   38829 command_runner.go:130] >     },
	I1213 18:35:57.159529   38829 command_runner.go:130] >     {
	I1213 18:35:57.159536   38829 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1213 18:35:57.159548   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.159554   38829 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1213 18:35:57.159560   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159564   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.159572   38829 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1213 18:35:57.159582   38829 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1213 18:35:57.159586   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159596   38829 command_runner.go:130] >       "size":  "60857170",
	I1213 18:35:57.159600   38829 command_runner.go:130] >       "uid":  {
	I1213 18:35:57.159604   38829 command_runner.go:130] >         "value":  "0"
	I1213 18:35:57.159607   38829 command_runner.go:130] >       },
	I1213 18:35:57.159618   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.159626   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.159629   38829 command_runner.go:130] >     },
	I1213 18:35:57.159633   38829 command_runner.go:130] >     {
	I1213 18:35:57.159646   38829 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1213 18:35:57.159650   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.159655   38829 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1213 18:35:57.159661   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159665   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.159673   38829 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1213 18:35:57.159684   38829 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1213 18:35:57.159687   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159691   38829 command_runner.go:130] >       "size":  "84949999",
	I1213 18:35:57.159697   38829 command_runner.go:130] >       "uid":  {
	I1213 18:35:57.159701   38829 command_runner.go:130] >         "value":  "0"
	I1213 18:35:57.159706   38829 command_runner.go:130] >       },
	I1213 18:35:57.159710   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.159720   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.159723   38829 command_runner.go:130] >     },
	I1213 18:35:57.159726   38829 command_runner.go:130] >     {
	I1213 18:35:57.159733   38829 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1213 18:35:57.159740   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.159750   38829 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1213 18:35:57.159756   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159762   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.159771   38829 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1213 18:35:57.159782   38829 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1213 18:35:57.159786   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159790   38829 command_runner.go:130] >       "size":  "72170325",
	I1213 18:35:57.159794   38829 command_runner.go:130] >       "uid":  {
	I1213 18:35:57.159800   38829 command_runner.go:130] >         "value":  "0"
	I1213 18:35:57.159804   38829 command_runner.go:130] >       },
	I1213 18:35:57.159810   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.159814   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.159820   38829 command_runner.go:130] >     },
	I1213 18:35:57.159823   38829 command_runner.go:130] >     {
	I1213 18:35:57.159829   38829 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1213 18:35:57.159836   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.159841   38829 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1213 18:35:57.159847   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159851   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.159859   38829 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1213 18:35:57.159870   38829 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1213 18:35:57.159874   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159878   38829 command_runner.go:130] >       "size":  "74106775",
	I1213 18:35:57.159882   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.159888   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.159892   38829 command_runner.go:130] >     },
	I1213 18:35:57.159897   38829 command_runner.go:130] >     {
	I1213 18:35:57.159904   38829 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1213 18:35:57.159910   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.159916   38829 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1213 18:35:57.159926   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159934   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.159942   38829 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1213 18:35:57.159966   38829 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1213 18:35:57.159973   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159977   38829 command_runner.go:130] >       "size":  "49822549",
	I1213 18:35:57.159981   38829 command_runner.go:130] >       "uid":  {
	I1213 18:35:57.159985   38829 command_runner.go:130] >         "value":  "0"
	I1213 18:35:57.159991   38829 command_runner.go:130] >       },
	I1213 18:35:57.159995   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.160003   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.160008   38829 command_runner.go:130] >     },
	I1213 18:35:57.160011   38829 command_runner.go:130] >     {
	I1213 18:35:57.160017   38829 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1213 18:35:57.160025   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.160030   38829 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1213 18:35:57.160033   38829 command_runner.go:130] >       ],
	I1213 18:35:57.160040   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.160048   38829 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1213 18:35:57.160059   38829 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1213 18:35:57.160063   38829 command_runner.go:130] >       ],
	I1213 18:35:57.160067   38829 command_runner.go:130] >       "size":  "519884",
	I1213 18:35:57.160070   38829 command_runner.go:130] >       "uid":  {
	I1213 18:35:57.160077   38829 command_runner.go:130] >         "value":  "65535"
	I1213 18:35:57.160080   38829 command_runner.go:130] >       },
	I1213 18:35:57.160084   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.160093   38829 command_runner.go:130] >       "pinned":  true
	I1213 18:35:57.160096   38829 command_runner.go:130] >     }
	I1213 18:35:57.160101   38829 command_runner.go:130] >   ]
	I1213 18:35:57.160112   38829 command_runner.go:130] > }
	I1213 18:35:57.162388   38829 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 18:35:57.162414   38829 crio.go:433] Images already preloaded, skipping extraction
	I1213 18:35:57.162470   38829 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 18:35:57.186777   38829 command_runner.go:130] > {
	I1213 18:35:57.186796   38829 command_runner.go:130] >   "images":  [
	I1213 18:35:57.186801   38829 command_runner.go:130] >     {
	I1213 18:35:57.186817   38829 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1213 18:35:57.186822   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.186828   38829 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1213 18:35:57.186832   38829 command_runner.go:130] >       ],
	I1213 18:35:57.186836   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.186846   38829 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1213 18:35:57.186854   38829 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1213 18:35:57.186857   38829 command_runner.go:130] >       ],
	I1213 18:35:57.186861   38829 command_runner.go:130] >       "size":  "111333938",
	I1213 18:35:57.186865   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.186873   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.186877   38829 command_runner.go:130] >     },
	I1213 18:35:57.186880   38829 command_runner.go:130] >     {
	I1213 18:35:57.186886   38829 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1213 18:35:57.186890   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.186895   38829 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1213 18:35:57.186898   38829 command_runner.go:130] >       ],
	I1213 18:35:57.186902   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.186913   38829 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1213 18:35:57.186921   38829 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1213 18:35:57.186928   38829 command_runner.go:130] >       ],
	I1213 18:35:57.186933   38829 command_runner.go:130] >       "size":  "29037500",
	I1213 18:35:57.186936   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.186942   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.186945   38829 command_runner.go:130] >     },
	I1213 18:35:57.186948   38829 command_runner.go:130] >     {
	I1213 18:35:57.186954   38829 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1213 18:35:57.186958   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.186963   38829 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1213 18:35:57.186966   38829 command_runner.go:130] >       ],
	I1213 18:35:57.186970   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.186977   38829 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1213 18:35:57.186985   38829 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1213 18:35:57.186992   38829 command_runner.go:130] >       ],
	I1213 18:35:57.186996   38829 command_runner.go:130] >       "size":  "74491780",
	I1213 18:35:57.187000   38829 command_runner.go:130] >       "username":  "nonroot",
	I1213 18:35:57.187004   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.187007   38829 command_runner.go:130] >     },
	I1213 18:35:57.187009   38829 command_runner.go:130] >     {
	I1213 18:35:57.187016   38829 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1213 18:35:57.187020   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.187024   38829 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1213 18:35:57.187029   38829 command_runner.go:130] >       ],
	I1213 18:35:57.187033   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.187041   38829 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1213 18:35:57.187050   38829 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1213 18:35:57.187053   38829 command_runner.go:130] >       ],
	I1213 18:35:57.187057   38829 command_runner.go:130] >       "size":  "60857170",
	I1213 18:35:57.187061   38829 command_runner.go:130] >       "uid":  {
	I1213 18:35:57.187064   38829 command_runner.go:130] >         "value":  "0"
	I1213 18:35:57.187067   38829 command_runner.go:130] >       },
	I1213 18:35:57.187075   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.187079   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.187082   38829 command_runner.go:130] >     },
	I1213 18:35:57.187085   38829 command_runner.go:130] >     {
	I1213 18:35:57.187092   38829 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1213 18:35:57.187095   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.187101   38829 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1213 18:35:57.187104   38829 command_runner.go:130] >       ],
	I1213 18:35:57.187108   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.187115   38829 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1213 18:35:57.187123   38829 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1213 18:35:57.187126   38829 command_runner.go:130] >       ],
	I1213 18:35:57.187130   38829 command_runner.go:130] >       "size":  "84949999",
	I1213 18:35:57.187134   38829 command_runner.go:130] >       "uid":  {
	I1213 18:35:57.187137   38829 command_runner.go:130] >         "value":  "0"
	I1213 18:35:57.187146   38829 command_runner.go:130] >       },
	I1213 18:35:57.187149   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.187153   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.187157   38829 command_runner.go:130] >     },
	I1213 18:35:57.187159   38829 command_runner.go:130] >     {
	I1213 18:35:57.187166   38829 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1213 18:35:57.187170   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.187175   38829 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1213 18:35:57.187178   38829 command_runner.go:130] >       ],
	I1213 18:35:57.187182   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.187190   38829 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1213 18:35:57.187199   38829 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1213 18:35:57.187202   38829 command_runner.go:130] >       ],
	I1213 18:35:57.187206   38829 command_runner.go:130] >       "size":  "72170325",
	I1213 18:35:57.187209   38829 command_runner.go:130] >       "uid":  {
	I1213 18:35:57.187213   38829 command_runner.go:130] >         "value":  "0"
	I1213 18:35:57.187216   38829 command_runner.go:130] >       },
	I1213 18:35:57.187219   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.187223   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.187226   38829 command_runner.go:130] >     },
	I1213 18:35:57.187229   38829 command_runner.go:130] >     {
	I1213 18:35:57.187236   38829 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1213 18:35:57.187239   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.187244   38829 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1213 18:35:57.187247   38829 command_runner.go:130] >       ],
	I1213 18:35:57.187251   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.187258   38829 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1213 18:35:57.187266   38829 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1213 18:35:57.187269   38829 command_runner.go:130] >       ],
	I1213 18:35:57.187273   38829 command_runner.go:130] >       "size":  "74106775",
	I1213 18:35:57.187277   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.187280   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.187283   38829 command_runner.go:130] >     },
	I1213 18:35:57.187291   38829 command_runner.go:130] >     {
	I1213 18:35:57.187297   38829 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1213 18:35:57.187300   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.187306   38829 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1213 18:35:57.187309   38829 command_runner.go:130] >       ],
	I1213 18:35:57.187313   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.187321   38829 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1213 18:35:57.187337   38829 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1213 18:35:57.187340   38829 command_runner.go:130] >       ],
	I1213 18:35:57.187344   38829 command_runner.go:130] >       "size":  "49822549",
	I1213 18:35:57.187348   38829 command_runner.go:130] >       "uid":  {
	I1213 18:35:57.187352   38829 command_runner.go:130] >         "value":  "0"
	I1213 18:35:57.187355   38829 command_runner.go:130] >       },
	I1213 18:35:57.187358   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.187362   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.187364   38829 command_runner.go:130] >     },
	I1213 18:35:57.187367   38829 command_runner.go:130] >     {
	I1213 18:35:57.187374   38829 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1213 18:35:57.187378   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.187382   38829 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1213 18:35:57.187385   38829 command_runner.go:130] >       ],
	I1213 18:35:57.187389   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.187396   38829 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1213 18:35:57.187404   38829 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1213 18:35:57.187407   38829 command_runner.go:130] >       ],
	I1213 18:35:57.187410   38829 command_runner.go:130] >       "size":  "519884",
	I1213 18:35:57.187414   38829 command_runner.go:130] >       "uid":  {
	I1213 18:35:57.187417   38829 command_runner.go:130] >         "value":  "65535"
	I1213 18:35:57.187420   38829 command_runner.go:130] >       },
	I1213 18:35:57.187424   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.187428   38829 command_runner.go:130] >       "pinned":  true
	I1213 18:35:57.187431   38829 command_runner.go:130] >     }
	I1213 18:35:57.187434   38829 command_runner.go:130] >   ]
	I1213 18:35:57.187440   38829 command_runner.go:130] > }
	I1213 18:35:57.187570   38829 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 18:35:57.187578   38829 cache_images.go:86] Images are preloaded, skipping loading
	I1213 18:35:57.187585   38829 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1213 18:35:57.187672   38829 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-752103 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-752103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 18:35:57.187756   38829 ssh_runner.go:195] Run: crio config
	I1213 18:35:57.235276   38829 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1213 18:35:57.235304   38829 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1213 18:35:57.235312   38829 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1213 18:35:57.235316   38829 command_runner.go:130] > #
	I1213 18:35:57.235323   38829 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1213 18:35:57.235330   38829 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1213 18:35:57.235336   38829 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1213 18:35:57.235344   38829 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1213 18:35:57.235351   38829 command_runner.go:130] > # reload'.
	I1213 18:35:57.235358   38829 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1213 18:35:57.235367   38829 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1213 18:35:57.235374   38829 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1213 18:35:57.235386   38829 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1213 18:35:57.235390   38829 command_runner.go:130] > [crio]
	I1213 18:35:57.235397   38829 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1213 18:35:57.235406   38829 command_runner.go:130] > # containers images, in this directory.
	I1213 18:35:57.235421   38829 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1213 18:35:57.235432   38829 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1213 18:35:57.235437   38829 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1213 18:35:57.235445   38829 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1213 18:35:57.235452   38829 command_runner.go:130] > # imagestore = ""
	I1213 18:35:57.235458   38829 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1213 18:35:57.235468   38829 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1213 18:35:57.235475   38829 command_runner.go:130] > # storage_driver = "overlay"
	I1213 18:35:57.235481   38829 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1213 18:35:57.235491   38829 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1213 18:35:57.235495   38829 command_runner.go:130] > # storage_option = [
	I1213 18:35:57.235502   38829 command_runner.go:130] > # ]
	I1213 18:35:57.235511   38829 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1213 18:35:57.235518   38829 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1213 18:35:57.235533   38829 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1213 18:35:57.235539   38829 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1213 18:35:57.235547   38829 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1213 18:35:57.235554   38829 command_runner.go:130] > # always happen on a node reboot
	I1213 18:35:57.235660   38829 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1213 18:35:57.235692   38829 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1213 18:35:57.235700   38829 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1213 18:35:57.235705   38829 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1213 18:35:57.235710   38829 command_runner.go:130] > # version_file_persist = ""
	I1213 18:35:57.235718   38829 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1213 18:35:57.235727   38829 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1213 18:35:57.235730   38829 command_runner.go:130] > # internal_wipe = true
	I1213 18:35:57.235739   38829 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1213 18:35:57.235744   38829 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1213 18:35:57.235748   38829 command_runner.go:130] > # internal_repair = true
	I1213 18:35:57.235754   38829 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1213 18:35:57.235760   38829 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1213 18:35:57.235769   38829 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1213 18:35:57.235775   38829 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1213 18:35:57.235781   38829 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1213 18:35:57.235784   38829 command_runner.go:130] > [crio.api]
	I1213 18:35:57.235790   38829 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1213 18:35:57.235795   38829 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1213 18:35:57.235800   38829 command_runner.go:130] > # IP address on which the stream server will listen.
	I1213 18:35:57.235804   38829 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1213 18:35:57.235811   38829 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1213 18:35:57.235816   38829 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1213 18:35:57.235819   38829 command_runner.go:130] > # stream_port = "0"
	I1213 18:35:57.235824   38829 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1213 18:35:57.235828   38829 command_runner.go:130] > # stream_enable_tls = false
	I1213 18:35:57.235838   38829 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1213 18:35:57.235842   38829 command_runner.go:130] > # stream_idle_timeout = ""
	I1213 18:35:57.235849   38829 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1213 18:35:57.235854   38829 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1213 18:35:57.235858   38829 command_runner.go:130] > # stream_tls_cert = ""
	I1213 18:35:57.235864   38829 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1213 18:35:57.235869   38829 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1213 18:35:57.235873   38829 command_runner.go:130] > # stream_tls_key = ""
	I1213 18:35:57.235880   38829 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1213 18:35:57.235886   38829 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1213 18:35:57.235892   38829 command_runner.go:130] > # automatically pick up the changes.
	I1213 18:35:57.235896   38829 command_runner.go:130] > # stream_tls_ca = ""
	I1213 18:35:57.235914   38829 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1213 18:35:57.235918   38829 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1213 18:35:57.235926   38829 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1213 18:35:57.235930   38829 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1213 18:35:57.235936   38829 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1213 18:35:57.235942   38829 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1213 18:35:57.235945   38829 command_runner.go:130] > [crio.runtime]
	I1213 18:35:57.235951   38829 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1213 18:35:57.235956   38829 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1213 18:35:57.235960   38829 command_runner.go:130] > # "nofile=1024:2048"
	I1213 18:35:57.235965   38829 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1213 18:35:57.235969   38829 command_runner.go:130] > # default_ulimits = [
	I1213 18:35:57.235972   38829 command_runner.go:130] > # ]
	I1213 18:35:57.235978   38829 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1213 18:35:57.236231   38829 command_runner.go:130] > # no_pivot = false
	I1213 18:35:57.236246   38829 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1213 18:35:57.236252   38829 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1213 18:35:57.236258   38829 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1213 18:35:57.236264   38829 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1213 18:35:57.236272   38829 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1213 18:35:57.236280   38829 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1213 18:35:57.236292   38829 command_runner.go:130] > # conmon = ""
	I1213 18:35:57.236297   38829 command_runner.go:130] > # Cgroup setting for conmon
	I1213 18:35:57.236304   38829 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1213 18:35:57.236308   38829 command_runner.go:130] > conmon_cgroup = "pod"
	I1213 18:35:57.236314   38829 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1213 18:35:57.236320   38829 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1213 18:35:57.236335   38829 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1213 18:35:57.236339   38829 command_runner.go:130] > # conmon_env = [
	I1213 18:35:57.236342   38829 command_runner.go:130] > # ]
	I1213 18:35:57.236348   38829 command_runner.go:130] > # Additional environment variables to set for all the
	I1213 18:35:57.236353   38829 command_runner.go:130] > # containers. These are overridden if set in the
	I1213 18:35:57.236358   38829 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1213 18:35:57.236362   38829 command_runner.go:130] > # default_env = [
	I1213 18:35:57.236365   38829 command_runner.go:130] > # ]
	I1213 18:35:57.236370   38829 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1213 18:35:57.236378   38829 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1213 18:35:57.236386   38829 command_runner.go:130] > # selinux = false
	I1213 18:35:57.236397   38829 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1213 18:35:57.236405   38829 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1213 18:35:57.236415   38829 command_runner.go:130] > # This option supports live configuration reload.
	I1213 18:35:57.236419   38829 command_runner.go:130] > # seccomp_profile = ""
	I1213 18:35:57.236425   38829 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1213 18:35:57.236436   38829 command_runner.go:130] > # This option supports live configuration reload.
	I1213 18:35:57.236440   38829 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1213 18:35:57.236447   38829 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1213 18:35:57.236457   38829 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1213 18:35:57.236464   38829 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1213 18:35:57.236470   38829 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1213 18:35:57.236477   38829 command_runner.go:130] > # This option supports live configuration reload.
	I1213 18:35:57.236482   38829 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1213 18:35:57.236493   38829 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1213 18:35:57.236497   38829 command_runner.go:130] > # the cgroup blockio controller.
	I1213 18:35:57.236501   38829 command_runner.go:130] > # blockio_config_file = ""
	I1213 18:35:57.236512   38829 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1213 18:35:57.236519   38829 command_runner.go:130] > # blockio parameters.
	I1213 18:35:57.236524   38829 command_runner.go:130] > # blockio_reload = false
	I1213 18:35:57.236530   38829 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1213 18:35:57.236538   38829 command_runner.go:130] > # irqbalance daemon.
	I1213 18:35:57.236543   38829 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1213 18:35:57.236550   38829 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1213 18:35:57.236560   38829 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1213 18:35:57.236567   38829 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1213 18:35:57.236573   38829 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1213 18:35:57.236579   38829 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1213 18:35:57.236584   38829 command_runner.go:130] > # This option supports live configuration reload.
	I1213 18:35:57.236589   38829 command_runner.go:130] > # rdt_config_file = ""
	I1213 18:35:57.236594   38829 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1213 18:35:57.236600   38829 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1213 18:35:57.236606   38829 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1213 18:35:57.236612   38829 command_runner.go:130] > # separate_pull_cgroup = ""
	I1213 18:35:57.236619   38829 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1213 18:35:57.236626   38829 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1213 18:35:57.236633   38829 command_runner.go:130] > # will be added.
	I1213 18:35:57.236637   38829 command_runner.go:130] > # default_capabilities = [
	I1213 18:35:57.236640   38829 command_runner.go:130] > # 	"CHOWN",
	I1213 18:35:57.236644   38829 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1213 18:35:57.236647   38829 command_runner.go:130] > # 	"FSETID",
	I1213 18:35:57.236650   38829 command_runner.go:130] > # 	"FOWNER",
	I1213 18:35:57.236653   38829 command_runner.go:130] > # 	"SETGID",
	I1213 18:35:57.236656   38829 command_runner.go:130] > # 	"SETUID",
	I1213 18:35:57.236674   38829 command_runner.go:130] > # 	"SETPCAP",
	I1213 18:35:57.236679   38829 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1213 18:35:57.236682   38829 command_runner.go:130] > # 	"KILL",
	I1213 18:35:57.236685   38829 command_runner.go:130] > # ]
	I1213 18:35:57.236693   38829 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1213 18:35:57.236702   38829 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1213 18:35:57.236710   38829 command_runner.go:130] > # add_inheritable_capabilities = false
	I1213 18:35:57.236716   38829 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1213 18:35:57.236722   38829 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1213 18:35:57.236726   38829 command_runner.go:130] > default_sysctls = [
	I1213 18:35:57.236731   38829 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1213 18:35:57.236734   38829 command_runner.go:130] > ]
	I1213 18:35:57.236738   38829 command_runner.go:130] > # List of devices on the host that a
	I1213 18:35:57.236748   38829 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1213 18:35:57.236755   38829 command_runner.go:130] > # allowed_devices = [
	I1213 18:35:57.236758   38829 command_runner.go:130] > # 	"/dev/fuse",
	I1213 18:35:57.236762   38829 command_runner.go:130] > # 	"/dev/net/tun",
	I1213 18:35:57.236772   38829 command_runner.go:130] > # ]
	I1213 18:35:57.236777   38829 command_runner.go:130] > # List of additional devices. specified as
	I1213 18:35:57.236784   38829 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1213 18:35:57.236794   38829 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1213 18:35:57.236800   38829 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1213 18:35:57.236804   38829 command_runner.go:130] > # additional_devices = [
	I1213 18:35:57.236832   38829 command_runner.go:130] > # ]
	I1213 18:35:57.236837   38829 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1213 18:35:57.236841   38829 command_runner.go:130] > # cdi_spec_dirs = [
	I1213 18:35:57.236844   38829 command_runner.go:130] > # 	"/etc/cdi",
	I1213 18:35:57.236848   38829 command_runner.go:130] > # 	"/var/run/cdi",
	I1213 18:35:57.236854   38829 command_runner.go:130] > # ]
	I1213 18:35:57.236861   38829 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1213 18:35:57.236870   38829 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1213 18:35:57.236874   38829 command_runner.go:130] > # Defaults to false.
	I1213 18:35:57.236880   38829 command_runner.go:130] > # device_ownership_from_security_context = false
	I1213 18:35:57.236891   38829 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1213 18:35:57.236898   38829 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1213 18:35:57.236901   38829 command_runner.go:130] > # hooks_dir = [
	I1213 18:35:57.236908   38829 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1213 18:35:57.236915   38829 command_runner.go:130] > # ]
	I1213 18:35:57.236921   38829 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1213 18:35:57.236931   38829 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1213 18:35:57.236939   38829 command_runner.go:130] > # its default mounts from the following two files:
	I1213 18:35:57.236942   38829 command_runner.go:130] > #
	I1213 18:35:57.236949   38829 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1213 18:35:57.236959   38829 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1213 18:35:57.236964   38829 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1213 18:35:57.236967   38829 command_runner.go:130] > #
	I1213 18:35:57.236974   38829 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1213 18:35:57.236984   38829 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1213 18:35:57.236990   38829 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1213 18:35:57.236996   38829 command_runner.go:130] > #      only add mounts it finds in this file.
	I1213 18:35:57.237024   38829 command_runner.go:130] > #
	I1213 18:35:57.237029   38829 command_runner.go:130] > # default_mounts_file = ""
	I1213 18:35:57.237035   38829 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1213 18:35:57.237044   38829 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1213 18:35:57.237052   38829 command_runner.go:130] > # pids_limit = -1
	I1213 18:35:57.237058   38829 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1213 18:35:57.237065   38829 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1213 18:35:57.237075   38829 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1213 18:35:57.237084   38829 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1213 18:35:57.237092   38829 command_runner.go:130] > # log_size_max = -1
	I1213 18:35:57.237099   38829 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1213 18:35:57.237104   38829 command_runner.go:130] > # log_to_journald = false
	I1213 18:35:57.237114   38829 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1213 18:35:57.237119   38829 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1213 18:35:57.237125   38829 command_runner.go:130] > # Path to directory for container attach sockets.
	I1213 18:35:57.237130   38829 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1213 18:35:57.237137   38829 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1213 18:35:57.237145   38829 command_runner.go:130] > # bind_mount_prefix = ""
	I1213 18:35:57.237151   38829 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1213 18:35:57.237155   38829 command_runner.go:130] > # read_only = false
	I1213 18:35:57.237162   38829 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1213 18:35:57.237173   38829 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1213 18:35:57.237181   38829 command_runner.go:130] > # live configuration reload.
	I1213 18:35:57.237191   38829 command_runner.go:130] > # log_level = "info"
	I1213 18:35:57.237200   38829 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1213 18:35:57.237212   38829 command_runner.go:130] > # This option supports live configuration reload.
	I1213 18:35:57.237216   38829 command_runner.go:130] > # log_filter = ""
	I1213 18:35:57.237222   38829 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1213 18:35:57.237228   38829 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1213 18:35:57.237237   38829 command_runner.go:130] > # separated by comma.
	I1213 18:35:57.237245   38829 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 18:35:57.237249   38829 command_runner.go:130] > # uid_mappings = ""
	I1213 18:35:57.237255   38829 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1213 18:35:57.237265   38829 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1213 18:35:57.237269   38829 command_runner.go:130] > # separated by comma.
	I1213 18:35:57.237277   38829 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 18:35:57.237284   38829 command_runner.go:130] > # gid_mappings = ""
	I1213 18:35:57.237290   38829 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1213 18:35:57.237297   38829 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1213 18:35:57.237311   38829 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1213 18:35:57.237319   38829 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 18:35:57.237323   38829 command_runner.go:130] > # minimum_mappable_uid = -1
	I1213 18:35:57.237329   38829 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1213 18:35:57.237339   38829 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1213 18:35:57.237345   38829 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1213 18:35:57.237354   38829 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 18:35:57.237949   38829 command_runner.go:130] > # minimum_mappable_gid = -1
	I1213 18:35:57.237966   38829 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1213 18:35:57.237972   38829 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1213 18:35:57.237979   38829 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1213 18:35:57.238476   38829 command_runner.go:130] > # ctr_stop_timeout = 30
	I1213 18:35:57.238490   38829 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1213 18:35:57.238497   38829 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1213 18:35:57.238503   38829 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1213 18:35:57.238519   38829 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1213 18:35:57.238932   38829 command_runner.go:130] > # drop_infra_ctr = true
	I1213 18:35:57.238947   38829 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1213 18:35:57.238955   38829 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1213 18:35:57.238963   38829 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1213 18:35:57.239291   38829 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1213 18:35:57.239306   38829 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1213 18:35:57.239313   38829 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1213 18:35:57.239319   38829 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1213 18:35:57.239324   38829 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1213 18:35:57.239634   38829 command_runner.go:130] > # shared_cpuset = ""
	I1213 18:35:57.239648   38829 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1213 18:35:57.239654   38829 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1213 18:35:57.240060   38829 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1213 18:35:57.240075   38829 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1213 18:35:57.240414   38829 command_runner.go:130] > # pinns_path = ""
	I1213 18:35:57.240427   38829 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1213 18:35:57.240434   38829 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1213 18:35:57.240846   38829 command_runner.go:130] > # enable_criu_support = true
	I1213 18:35:57.240873   38829 command_runner.go:130] > # Enable/disable the generation of the container,
	I1213 18:35:57.240881   38829 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1213 18:35:57.241322   38829 command_runner.go:130] > # enable_pod_events = false
	I1213 18:35:57.241336   38829 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1213 18:35:57.241342   38829 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1213 18:35:57.241756   38829 command_runner.go:130] > # default_runtime = "crun"
	I1213 18:35:57.241768   38829 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1213 18:35:57.241777   38829 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1213 18:35:57.241786   38829 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1213 18:35:57.241791   38829 command_runner.go:130] > # creation as a file is not desired either.
	I1213 18:35:57.241800   38829 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1213 18:35:57.241820   38829 command_runner.go:130] > # the hostname is being managed dynamically.
	I1213 18:35:57.242010   38829 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1213 18:35:57.242355   38829 command_runner.go:130] > # ]
	I1213 18:35:57.242370   38829 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1213 18:35:57.242386   38829 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1213 18:35:57.242394   38829 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1213 18:35:57.242400   38829 command_runner.go:130] > # Each entry in the table should follow the format:
	I1213 18:35:57.242406   38829 command_runner.go:130] > #
	I1213 18:35:57.242412   38829 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1213 18:35:57.242419   38829 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1213 18:35:57.242423   38829 command_runner.go:130] > # runtime_type = "oci"
	I1213 18:35:57.242427   38829 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1213 18:35:57.242434   38829 command_runner.go:130] > # inherit_default_runtime = false
	I1213 18:35:57.242441   38829 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1213 18:35:57.242445   38829 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1213 18:35:57.242449   38829 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1213 18:35:57.242460   38829 command_runner.go:130] > # monitor_env = []
	I1213 18:35:57.242465   38829 command_runner.go:130] > # privileged_without_host_devices = false
	I1213 18:35:57.242470   38829 command_runner.go:130] > # allowed_annotations = []
	I1213 18:35:57.242487   38829 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1213 18:35:57.242491   38829 command_runner.go:130] > # no_sync_log = false
	I1213 18:35:57.242496   38829 command_runner.go:130] > # default_annotations = {}
	I1213 18:35:57.242500   38829 command_runner.go:130] > # stream_websockets = false
	I1213 18:35:57.242507   38829 command_runner.go:130] > # seccomp_profile = ""
	I1213 18:35:57.242553   38829 command_runner.go:130] > # Where:
	I1213 18:35:57.242564   38829 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1213 18:35:57.242570   38829 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1213 18:35:57.242577   38829 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1213 18:35:57.242583   38829 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1213 18:35:57.242587   38829 command_runner.go:130] > #   in $PATH.
	I1213 18:35:57.242593   38829 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1213 18:35:57.242598   38829 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1213 18:35:57.242614   38829 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1213 18:35:57.242620   38829 command_runner.go:130] > #   state.
	I1213 18:35:57.242626   38829 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1213 18:35:57.242633   38829 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1213 18:35:57.242641   38829 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1213 18:35:57.242647   38829 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1213 18:35:57.242652   38829 command_runner.go:130] > #   the values from the default runtime on load time.
	I1213 18:35:57.242659   38829 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1213 18:35:57.242665   38829 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1213 18:35:57.242671   38829 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1213 18:35:57.242684   38829 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1213 18:35:57.242694   38829 command_runner.go:130] > #   The currently recognized values are:
	I1213 18:35:57.242701   38829 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1213 18:35:57.242709   38829 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1213 18:35:57.242718   38829 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1213 18:35:57.242724   38829 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1213 18:35:57.242736   38829 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1213 18:35:57.242745   38829 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1213 18:35:57.242761   38829 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1213 18:35:57.242774   38829 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1213 18:35:57.242781   38829 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1213 18:35:57.242788   38829 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1213 18:35:57.242795   38829 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1213 18:35:57.242802   38829 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1213 18:35:57.242813   38829 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1213 18:35:57.242824   38829 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1213 18:35:57.242842   38829 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1213 18:35:57.242850   38829 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1213 18:35:57.242861   38829 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1213 18:35:57.242865   38829 command_runner.go:130] > #   deprecated option "conmon".
	I1213 18:35:57.242873   38829 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1213 18:35:57.242881   38829 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1213 18:35:57.242888   38829 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1213 18:35:57.242894   38829 command_runner.go:130] > #   should be moved to the container's cgroup
	I1213 18:35:57.242911   38829 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1213 18:35:57.242917   38829 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1213 18:35:57.242924   38829 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1213 18:35:57.242933   38829 command_runner.go:130] > #   conmon-rs by using:
	I1213 18:35:57.242941   38829 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1213 18:35:57.242954   38829 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1213 18:35:57.242962   38829 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1213 18:35:57.242973   38829 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1213 18:35:57.242978   38829 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1213 18:35:57.242995   38829 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1213 18:35:57.243003   38829 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1213 18:35:57.243008   38829 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1213 18:35:57.243017   38829 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1213 18:35:57.243027   38829 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1213 18:35:57.243033   38829 command_runner.go:130] > #   when a machine crash happens.
	I1213 18:35:57.243040   38829 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1213 18:35:57.243049   38829 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1213 18:35:57.243065   38829 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1213 18:35:57.243070   38829 command_runner.go:130] > #   seccomp profile for the runtime.
	I1213 18:35:57.243076   38829 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1213 18:35:57.243084   38829 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1213 18:35:57.243094   38829 command_runner.go:130] > #
	I1213 18:35:57.243099   38829 command_runner.go:130] > # Using the seccomp notifier feature:
	I1213 18:35:57.243102   38829 command_runner.go:130] > #
	I1213 18:35:57.243113   38829 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1213 18:35:57.243123   38829 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1213 18:35:57.243126   38829 command_runner.go:130] > #
	I1213 18:35:57.243139   38829 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1213 18:35:57.243153   38829 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1213 18:35:57.243157   38829 command_runner.go:130] > #
	I1213 18:35:57.243163   38829 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1213 18:35:57.243170   38829 command_runner.go:130] > # feature.
	I1213 18:35:57.243173   38829 command_runner.go:130] > #
	I1213 18:35:57.243179   38829 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1213 18:35:57.243186   38829 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1213 18:35:57.243196   38829 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1213 18:35:57.243208   38829 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1213 18:35:57.243219   38829 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1213 18:35:57.243222   38829 command_runner.go:130] > #
	I1213 18:35:57.243229   38829 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1213 18:35:57.243235   38829 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1213 18:35:57.243256   38829 command_runner.go:130] > #
	I1213 18:35:57.243267   38829 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I1213 18:35:57.243274   38829 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1213 18:35:57.243283   38829 command_runner.go:130] > #
	I1213 18:35:57.243294   38829 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1213 18:35:57.243301   38829 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1213 18:35:57.243304   38829 command_runner.go:130] > # limitation.
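	A minimal sketch of the wiring described above (not taken from this machine's config; the handler name runc is just illustrative, and only keys and values quoted in the comments are used): the annotation is allowed on a runtime handler, and a pod then opts in through its metadata.
	  [crio.runtime.runtimes.runc]
	  allowed_annotations = [
	      "io.kubernetes.cri-o.seccompNotifierAction",
	  ]
	  # Pod metadata (restartPolicy must be Never, as noted above):
	  #   annotations:
	  #     io.kubernetes.cri-o.seccompNotifierAction: "stop"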
	I1213 18:35:57.243341   38829 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1213 18:35:57.243623   38829 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1213 18:35:57.243757   38829 command_runner.go:130] > runtime_type = ""
	I1213 18:35:57.244003   38829 command_runner.go:130] > runtime_root = "/run/crun"
	I1213 18:35:57.244255   38829 command_runner.go:130] > inherit_default_runtime = false
	I1213 18:35:57.244399   38829 command_runner.go:130] > runtime_config_path = ""
	I1213 18:35:57.244539   38829 command_runner.go:130] > container_min_memory = ""
	I1213 18:35:57.244777   38829 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1213 18:35:57.245055   38829 command_runner.go:130] > monitor_cgroup = "pod"
	I1213 18:35:57.245214   38829 command_runner.go:130] > monitor_exec_cgroup = ""
	I1213 18:35:57.245448   38829 command_runner.go:130] > allowed_annotations = [
	I1213 18:35:57.245605   38829 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1213 18:35:57.245830   38829 command_runner.go:130] > ]
	I1213 18:35:57.246064   38829 command_runner.go:130] > privileged_without_host_devices = false
	I1213 18:35:57.246554   38829 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1213 18:35:57.246808   38829 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1213 18:35:57.246935   38829 command_runner.go:130] > runtime_type = ""
	I1213 18:35:57.247167   38829 command_runner.go:130] > runtime_root = "/run/runc"
	I1213 18:35:57.247404   38829 command_runner.go:130] > inherit_default_runtime = false
	I1213 18:35:57.247591   38829 command_runner.go:130] > runtime_config_path = ""
	I1213 18:35:57.247761   38829 command_runner.go:130] > container_min_memory = ""
	I1213 18:35:57.248046   38829 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1213 18:35:57.248332   38829 command_runner.go:130] > monitor_cgroup = "pod"
	I1213 18:35:57.248492   38829 command_runner.go:130] > monitor_exec_cgroup = ""
	I1213 18:35:57.248957   38829 command_runner.go:130] > privileged_without_host_devices = false
	I1213 18:35:57.249339   38829 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1213 18:35:57.249353   38829 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1213 18:35:57.249360   38829 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1213 18:35:57.249369   38829 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1213 18:35:57.249380   38829 command_runner.go:130] > # The currently supported resources are "cpuperiod" "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1213 18:35:57.249391   38829 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1213 18:35:57.249420   38829 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1213 18:35:57.249432   38829 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1213 18:35:57.249442   38829 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1213 18:35:57.249454   38829 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1213 18:35:57.249460   38829 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1213 18:35:57.249474   38829 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1213 18:35:57.249483   38829 command_runner.go:130] > # Example:
	I1213 18:35:57.249488   38829 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1213 18:35:57.249494   38829 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1213 18:35:57.249507   38829 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1213 18:35:57.249513   38829 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1213 18:35:57.249522   38829 command_runner.go:130] > # cpuset = "0-1"
	I1213 18:35:57.249525   38829 command_runner.go:130] > # cpushares = "5"
	I1213 18:35:57.249529   38829 command_runner.go:130] > # cpuquota = "1000"
	I1213 18:35:57.249533   38829 command_runner.go:130] > # cpuperiod = "100000"
	I1213 18:35:57.249548   38829 command_runner.go:130] > # cpulimit = "35"
	I1213 18:35:57.249556   38829 command_runner.go:130] > # Where:
	I1213 18:35:57.249560   38829 command_runner.go:130] > # The workload name is workload-type.
	I1213 18:35:57.249568   38829 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1213 18:35:57.249574   38829 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1213 18:35:57.249585   38829 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1213 18:35:57.249594   38829 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1213 18:35:57.249604   38829 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
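	A minimal sketch of the pod side of the example above, using the activation annotation and prefix from that example (the container name "app" and the cpushares value are hypothetical):
	  metadata:
	    annotations:
	      io.crio/workload: ""
	      io.crio.workload-type.cpushares/app: "5"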
	I1213 18:35:57.249739   38829 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1213 18:35:57.249752   38829 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1213 18:35:57.249757   38829 command_runner.go:130] > # Default value is set to true
	I1213 18:35:57.250196   38829 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1213 18:35:57.250210   38829 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1213 18:35:57.250216   38829 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1213 18:35:57.250220   38829 command_runner.go:130] > # Default value is set to 'false'
	I1213 18:35:57.250699   38829 command_runner.go:130] > # disable_hostport_mapping = false
	I1213 18:35:57.250712   38829 command_runner.go:130] > # timezone: To set the timezone for a container in CRI-O.
	I1213 18:35:57.250722   38829 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1213 18:35:57.251071   38829 command_runner.go:130] > # timezone = ""
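	One way to change the options above is a drop-in file under /etc/crio/crio.conf.d, the directory CRI-O is shown reading later in this log. A minimal sketch (the file name 20-timezone.conf is hypothetical; the key is the one documented above):
	  # /etc/crio/crio.conf.d/20-timezone.conf
	  [crio.runtime]
	  timezone = "Local"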
	I1213 18:35:57.251082   38829 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1213 18:35:57.251086   38829 command_runner.go:130] > #
	I1213 18:35:57.251093   38829 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1213 18:35:57.251100   38829 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1213 18:35:57.251103   38829 command_runner.go:130] > [crio.image]
	I1213 18:35:57.251109   38829 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1213 18:35:57.251555   38829 command_runner.go:130] > # default_transport = "docker://"
	I1213 18:35:57.251569   38829 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1213 18:35:57.251576   38829 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1213 18:35:57.251964   38829 command_runner.go:130] > # global_auth_file = ""
	I1213 18:35:57.251977   38829 command_runner.go:130] > # The image used to instantiate infra containers.
	I1213 18:35:57.251982   38829 command_runner.go:130] > # This option supports live configuration reload.
	I1213 18:35:57.252443   38829 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1213 18:35:57.252459   38829 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1213 18:35:57.252468   38829 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1213 18:35:57.252474   38829 command_runner.go:130] > # This option supports live configuration reload.
	I1213 18:35:57.252817   38829 command_runner.go:130] > # pause_image_auth_file = ""
	I1213 18:35:57.252830   38829 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1213 18:35:57.252837   38829 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1213 18:35:57.252844   38829 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1213 18:35:57.252849   38829 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1213 18:35:57.253309   38829 command_runner.go:130] > # pause_command = "/pause"
	I1213 18:35:57.253323   38829 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1213 18:35:57.253330   38829 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1213 18:35:57.253336   38829 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1213 18:35:57.253342   38829 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1213 18:35:57.253349   38829 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1213 18:35:57.253355   38829 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1213 18:35:57.253590   38829 command_runner.go:130] > # pinned_images = [
	I1213 18:35:57.253600   38829 command_runner.go:130] > # ]
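	A minimal sketch of the pinning described above (the pause tag matches the default pause_image shown earlier; the trailing-wildcard glob entry is purely illustrative):
	  [crio.image]
	  pinned_images = [
	      "registry.k8s.io/pause:3.10.1",
	      "registry.k8s.io/kube-apiserver*",
	  ]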
	I1213 18:35:57.253607   38829 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1213 18:35:57.253614   38829 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1213 18:35:57.253621   38829 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1213 18:35:57.253627   38829 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1213 18:35:57.253636   38829 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1213 18:35:57.253910   38829 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1213 18:35:57.253925   38829 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1213 18:35:57.253939   38829 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1213 18:35:57.253949   38829 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1213 18:35:57.253960   38829 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1213 18:35:57.253967   38829 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1213 18:35:57.253980   38829 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1213 18:35:57.253986   38829 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1213 18:35:57.253995   38829 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1213 18:35:57.254000   38829 command_runner.go:130] > # changing them here.
	I1213 18:35:57.254012   38829 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1213 18:35:57.254016   38829 command_runner.go:130] > # insecure_registries = [
	I1213 18:35:57.254268   38829 command_runner.go:130] > # ]
	I1213 18:35:57.254281   38829 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1213 18:35:57.254287   38829 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1213 18:35:57.254424   38829 command_runner.go:130] > # image_volumes = "mkdir"
	I1213 18:35:57.254436   38829 command_runner.go:130] > # Temporary directory to use for storing big files
	I1213 18:35:57.254580   38829 command_runner.go:130] > # big_files_temporary_dir = ""
	I1213 18:35:57.254592   38829 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1213 18:35:57.254600   38829 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1213 18:35:57.254897   38829 command_runner.go:130] > # auto_reload_registries = false
	I1213 18:35:57.254910   38829 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1213 18:35:57.254920   38829 command_runner.go:130] > # gets canceled. This value will be also used for calculating the pull progress interval to pull_progress_timeout / 10.
	I1213 18:35:57.254926   38829 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1213 18:35:57.254930   38829 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1213 18:35:57.254935   38829 command_runner.go:130] > # The mode of short name resolution.
	I1213 18:35:57.254941   38829 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1213 18:35:57.254949   38829 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used, but the results are ambiguous.
	I1213 18:35:57.254965   38829 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1213 18:35:57.254970   38829 command_runner.go:130] > # short_name_mode = "enforcing"
	I1213 18:35:57.254982   38829 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1213 18:35:57.254988   38829 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1213 18:35:57.255234   38829 command_runner.go:130] > # oci_artifact_mount_support = true
	I1213 18:35:57.255247   38829 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1213 18:35:57.255251   38829 command_runner.go:130] > # CNI plugins.
	I1213 18:35:57.255254   38829 command_runner.go:130] > [crio.network]
	I1213 18:35:57.255260   38829 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1213 18:35:57.255266   38829 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1213 18:35:57.255275   38829 command_runner.go:130] > # cni_default_network = ""
	I1213 18:35:57.255283   38829 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1213 18:35:57.255416   38829 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1213 18:35:57.255429   38829 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1213 18:35:57.255573   38829 command_runner.go:130] > # plugin_dirs = [
	I1213 18:35:57.255807   38829 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1213 18:35:57.255816   38829 command_runner.go:130] > # ]
	I1213 18:35:57.255821   38829 command_runner.go:130] > # List of included pod metrics.
	I1213 18:35:57.255825   38829 command_runner.go:130] > # included_pod_metrics = [
	I1213 18:35:57.255828   38829 command_runner.go:130] > # ]
	I1213 18:35:57.255834   38829 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1213 18:35:57.255838   38829 command_runner.go:130] > [crio.metrics]
	I1213 18:35:57.255843   38829 command_runner.go:130] > # Globally enable or disable metrics support.
	I1213 18:35:57.255847   38829 command_runner.go:130] > # enable_metrics = false
	I1213 18:35:57.255851   38829 command_runner.go:130] > # Specify enabled metrics collectors.
	I1213 18:35:57.255867   38829 command_runner.go:130] > # Per default all metrics are enabled.
	I1213 18:35:57.255879   38829 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1213 18:35:57.255889   38829 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1213 18:35:57.255900   38829 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1213 18:35:57.255905   38829 command_runner.go:130] > # metrics_collectors = [
	I1213 18:35:57.256016   38829 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1213 18:35:57.256027   38829 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1213 18:35:57.256031   38829 command_runner.go:130] > # 	"containers_oom_total",
	I1213 18:35:57.256331   38829 command_runner.go:130] > # 	"processes_defunct",
	I1213 18:35:57.256341   38829 command_runner.go:130] > # 	"operations_total",
	I1213 18:35:57.256346   38829 command_runner.go:130] > # 	"operations_latency_seconds",
	I1213 18:35:57.256351   38829 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1213 18:35:57.256361   38829 command_runner.go:130] > # 	"operations_errors_total",
	I1213 18:35:57.256365   38829 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1213 18:35:57.256370   38829 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1213 18:35:57.256374   38829 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1213 18:35:57.257117   38829 command_runner.go:130] > # 	"image_pulls_success_total",
	I1213 18:35:57.257132   38829 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1213 18:35:57.257137   38829 command_runner.go:130] > # 	"containers_oom_count_total",
	I1213 18:35:57.257143   38829 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1213 18:35:57.257155   38829 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1213 18:35:57.257161   38829 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1213 18:35:57.257170   38829 command_runner.go:130] > # ]
	I1213 18:35:57.257177   38829 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1213 18:35:57.257185   38829 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1213 18:35:57.257191   38829 command_runner.go:130] > # The port on which the metrics server will listen.
	I1213 18:35:57.257199   38829 command_runner.go:130] > # metrics_port = 9090
	I1213 18:35:57.257204   38829 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1213 18:35:57.257212   38829 command_runner.go:130] > # metrics_socket = ""
	I1213 18:35:57.257233   38829 command_runner.go:130] > # The certificate for the secure metrics server.
	I1213 18:35:57.257245   38829 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1213 18:35:57.257252   38829 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1213 18:35:57.257260   38829 command_runner.go:130] > # certificate on any modification event.
	I1213 18:35:57.257270   38829 command_runner.go:130] > # metrics_cert = ""
	I1213 18:35:57.257276   38829 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1213 18:35:57.257285   38829 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1213 18:35:57.257289   38829 command_runner.go:130] > # metrics_key = ""
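	Putting the defaults above together, a minimal sketch that turns the metrics endpoint on with a reduced collector set (only keys, collector names, and values documented in the comments are used; which collectors to keep is illustrative):
	  [crio.metrics]
	  enable_metrics = true
	  metrics_host = "127.0.0.1"
	  metrics_port = 9090
	  metrics_collectors = [
	      "operations_total",
	      "image_pulls_failure_total",
	  ]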
	I1213 18:35:57.257299   38829 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1213 18:35:57.257318   38829 command_runner.go:130] > [crio.tracing]
	I1213 18:35:57.257325   38829 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1213 18:35:57.257329   38829 command_runner.go:130] > # enable_tracing = false
	I1213 18:35:57.257339   38829 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1213 18:35:57.257343   38829 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1213 18:35:57.257354   38829 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1213 18:35:57.257366   38829 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
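	A minimal sketch of enabling the tracing described above, using only keys and values from the comments (1000000 samples every span, as noted):
	  [crio.tracing]
	  enable_tracing = true
	  tracing_endpoint = "127.0.0.1:4317"
	  tracing_sampling_rate_per_million = 1000000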
	I1213 18:35:57.257381   38829 command_runner.go:130] > # CRI-O NRI configuration.
	I1213 18:35:57.257393   38829 command_runner.go:130] > [crio.nri]
	I1213 18:35:57.257402   38829 command_runner.go:130] > # Globally enable or disable NRI.
	I1213 18:35:57.257406   38829 command_runner.go:130] > # enable_nri = true
	I1213 18:35:57.257410   38829 command_runner.go:130] > # NRI socket to listen on.
	I1213 18:35:57.257415   38829 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1213 18:35:57.257423   38829 command_runner.go:130] > # NRI plugin directory to use.
	I1213 18:35:57.257428   38829 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1213 18:35:57.257437   38829 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1213 18:35:57.257442   38829 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1213 18:35:57.257457   38829 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1213 18:35:57.257514   38829 command_runner.go:130] > # nri_disable_connections = false
	I1213 18:35:57.257530   38829 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1213 18:35:57.257535   38829 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1213 18:35:57.257544   38829 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1213 18:35:57.257549   38829 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1213 18:35:57.257558   38829 command_runner.go:130] > # NRI default validator configuration.
	I1213 18:35:57.257566   38829 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1213 18:35:57.257576   38829 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1213 18:35:57.257584   38829 command_runner.go:130] > # can be restricted/rejected:
	I1213 18:35:57.257588   38829 command_runner.go:130] > # - OCI hook injection
	I1213 18:35:57.257597   38829 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1213 18:35:57.257609   38829 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1213 18:35:57.257615   38829 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1213 18:35:57.257624   38829 command_runner.go:130] > # - adjustment of linux namespaces
	I1213 18:35:57.257632   38829 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1213 18:35:57.257642   38829 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1213 18:35:57.257652   38829 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1213 18:35:57.257660   38829 command_runner.go:130] > #
	I1213 18:35:57.257664   38829 command_runner.go:130] > # [crio.nri.default_validator]
	I1213 18:35:57.257672   38829 command_runner.go:130] > # nri_enable_default_validator = false
	I1213 18:35:57.257686   38829 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1213 18:35:57.257692   38829 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1213 18:35:57.257699   38829 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1213 18:35:57.257712   38829 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1213 18:35:57.257721   38829 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1213 18:35:57.257726   38829 command_runner.go:130] > # nri_validator_required_plugins = [
	I1213 18:35:57.257732   38829 command_runner.go:130] > # ]
	I1213 18:35:57.257738   38829 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
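	A minimal sketch of the default validator described above, restricted to keys listed in the comments (which adjustments to reject is illustrative):
	  [crio.nri]
	  enable_nri = true

	  [crio.nri.default_validator]
	  nri_enable_default_validator = true
	  nri_validator_reject_oci_hook_adjustment = true
	  nri_validator_reject_namespace_adjustment = true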
	I1213 18:35:57.257747   38829 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1213 18:35:57.257763   38829 command_runner.go:130] > [crio.stats]
	I1213 18:35:57.257772   38829 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1213 18:35:57.257778   38829 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1213 18:35:57.257782   38829 command_runner.go:130] > # stats_collection_period = 0
	I1213 18:35:57.257792   38829 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1213 18:35:57.257800   38829 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1213 18:35:57.257809   38829 command_runner.go:130] > # collection_period = 0
	I1213 18:35:57.259571   38829 command_runner.go:130] ! time="2025-12-13T18:35:57.21464252Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1213 18:35:57.259589   38829 command_runner.go:130] ! time="2025-12-13T18:35:57.214677794Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1213 18:35:57.259613   38829 command_runner.go:130] ! time="2025-12-13T18:35:57.214706635Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1213 18:35:57.259625   38829 command_runner.go:130] ! time="2025-12-13T18:35:57.21473084Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1213 18:35:57.259635   38829 command_runner.go:130] ! time="2025-12-13T18:35:57.214801782Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:35:57.259643   38829 command_runner.go:130] ! time="2025-12-13T18:35:57.215251382Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1213 18:35:57.259658   38829 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1213 18:35:57.259749   38829 cni.go:84] Creating CNI manager for ""
	I1213 18:35:57.259765   38829 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 18:35:57.259800   38829 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 18:35:57.259831   38829 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-752103 NodeName:functional-752103 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 18:35:57.259972   38829 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-752103"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
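	The apiServer extraArgs in the generated config correspond to minikube's --extra-config mechanism; a hedged sketch of an equivalent user-facing invocation (the exact flags passed for this test run are not shown in this excerpt):
	  minikube start -p functional-752103 \
	    --extra-config=apiserver.enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota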
	
	I1213 18:35:57.260053   38829 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 18:35:57.267743   38829 command_runner.go:130] > kubeadm
	I1213 18:35:57.267764   38829 command_runner.go:130] > kubectl
	I1213 18:35:57.267769   38829 command_runner.go:130] > kubelet
	I1213 18:35:57.268114   38829 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 18:35:57.268211   38829 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 18:35:57.275739   38829 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1213 18:35:57.288967   38829 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 18:35:57.301790   38829 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1213 18:35:57.314673   38829 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 18:35:57.318486   38829 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1213 18:35:57.318580   38829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 18:35:57.437137   38829 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 18:35:57.456752   38829 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103 for IP: 192.168.49.2
	I1213 18:35:57.456776   38829 certs.go:195] generating shared ca certs ...
	I1213 18:35:57.456809   38829 certs.go:227] acquiring lock for ca certs: {Name:mkf9b87b9a1a82bdfcae8a54365751154f6d858f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 18:35:57.456950   38829 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key
	I1213 18:35:57.457003   38829 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key
	I1213 18:35:57.457091   38829 certs.go:257] generating profile certs ...
	I1213 18:35:57.457200   38829 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/client.key
	I1213 18:35:57.457253   38829 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/apiserver.key.597c6026
	I1213 18:35:57.457304   38829 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/proxy-client.key
	I1213 18:35:57.457312   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1213 18:35:57.457324   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1213 18:35:57.457340   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1213 18:35:57.457356   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1213 18:35:57.457367   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1213 18:35:57.457383   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1213 18:35:57.457395   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1213 18:35:57.457405   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1213 18:35:57.457457   38829 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637.pem (1338 bytes)
	W1213 18:35:57.457490   38829 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637_empty.pem, impossibly tiny 0 bytes
	I1213 18:35:57.457499   38829 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 18:35:57.457529   38829 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem (1082 bytes)
	I1213 18:35:57.457562   38829 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem (1123 bytes)
	I1213 18:35:57.457593   38829 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem (1675 bytes)
	I1213 18:35:57.457644   38829 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem (1708 bytes)
	I1213 18:35:57.457676   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637.pem -> /usr/share/ca-certificates/4637.pem
	I1213 18:35:57.457691   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem -> /usr/share/ca-certificates/46372.pem
	I1213 18:35:57.457705   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1213 18:35:57.458319   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 18:35:57.479443   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 18:35:57.498974   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 18:35:57.520210   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 18:35:57.540966   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 18:35:57.558774   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 18:35:57.576442   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 18:35:57.593767   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 18:35:57.611061   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637.pem --> /usr/share/ca-certificates/4637.pem (1338 bytes)
	I1213 18:35:57.628952   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem --> /usr/share/ca-certificates/46372.pem (1708 bytes)
	I1213 18:35:57.646627   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 18:35:57.664290   38829 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 18:35:57.677693   38829 ssh_runner.go:195] Run: openssl version
	I1213 18:35:57.683465   38829 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1213 18:35:57.683918   38829 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4637.pem
	I1213 18:35:57.691710   38829 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4637.pem /etc/ssl/certs/4637.pem
	I1213 18:35:57.699237   38829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4637.pem
	I1213 18:35:57.702943   38829 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 13 18:27 /usr/share/ca-certificates/4637.pem
	I1213 18:35:57.702972   38829 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 18:27 /usr/share/ca-certificates/4637.pem
	I1213 18:35:57.703038   38829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4637.pem
	I1213 18:35:57.743436   38829 command_runner.go:130] > 51391683
	I1213 18:35:57.743914   38829 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 18:35:57.751320   38829 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/46372.pem
	I1213 18:35:57.758498   38829 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/46372.pem /etc/ssl/certs/46372.pem
	I1213 18:35:57.765907   38829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/46372.pem
	I1213 18:35:57.769321   38829 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 13 18:27 /usr/share/ca-certificates/46372.pem
	I1213 18:35:57.769343   38829 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 18:27 /usr/share/ca-certificates/46372.pem
	I1213 18:35:57.769391   38829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/46372.pem
	I1213 18:35:57.809666   38829 command_runner.go:130] > 3ec20f2e
	I1213 18:35:57.810146   38829 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 18:35:57.818335   38829 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 18:35:57.826660   38829 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 18:35:57.834746   38829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 18:35:57.838666   38829 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 13 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I1213 18:35:57.838764   38829 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I1213 18:35:57.838851   38829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 18:35:57.879619   38829 command_runner.go:130] > b5213941
	I1213 18:35:57.880088   38829 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
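	The hash printed by openssl just above is what names the symlink being tested here; a sketch of how such a link could be created by hand (minikube's own command for this particular step is not shown in this excerpt):
	  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"   # b5213941.0 in this run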
	I1213 18:35:57.887654   38829 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 18:35:57.891412   38829 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 18:35:57.891437   38829 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1213 18:35:57.891445   38829 command_runner.go:130] > Device: 259,1	Inode: 1056084     Links: 1
	I1213 18:35:57.891452   38829 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 18:35:57.891459   38829 command_runner.go:130] > Access: 2025-12-13 18:31:50.964784337 +0000
	I1213 18:35:57.891465   38829 command_runner.go:130] > Modify: 2025-12-13 18:27:46.490235937 +0000
	I1213 18:35:57.891470   38829 command_runner.go:130] > Change: 2025-12-13 18:27:46.490235937 +0000
	I1213 18:35:57.891475   38829 command_runner.go:130] >  Birth: 2025-12-13 18:27:46.490235937 +0000
	I1213 18:35:57.891539   38829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 18:35:57.937033   38829 command_runner.go:130] > Certificate will not expire
	I1213 18:35:57.937482   38829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 18:35:57.978137   38829 command_runner.go:130] > Certificate will not expire
	I1213 18:35:57.978564   38829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 18:35:58.033951   38829 command_runner.go:130] > Certificate will not expire
	I1213 18:35:58.034441   38829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 18:35:58.075936   38829 command_runner.go:130] > Certificate will not expire
	I1213 18:35:58.076412   38829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 18:35:58.118212   38829 command_runner.go:130] > Certificate will not expire
	I1213 18:35:58.118338   38829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 18:35:58.159347   38829 command_runner.go:130] > Certificate will not expire
	I1213 18:35:58.159444   38829 kubeadm.go:401] StartCluster: {Name:functional-752103 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-752103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFi
rmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 18:35:58.159559   38829 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 18:35:58.159642   38829 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 18:35:58.186428   38829 cri.go:89] found id: ""
	I1213 18:35:58.186502   38829 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 18:35:58.193645   38829 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1213 18:35:58.193670   38829 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1213 18:35:58.193678   38829 command_runner.go:130] > /var/lib/minikube/etcd:
	I1213 18:35:58.194604   38829 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 18:35:58.194674   38829 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 18:35:58.194749   38829 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 18:35:58.202237   38829 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 18:35:58.202735   38829 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-752103" does not appear in /home/jenkins/minikube-integration/22122-2686/kubeconfig
	I1213 18:35:58.202850   38829 kubeconfig.go:62] /home/jenkins/minikube-integration/22122-2686/kubeconfig needs updating (will repair): [kubeconfig missing "functional-752103" cluster setting kubeconfig missing "functional-752103" context setting]
	I1213 18:35:58.203123   38829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/kubeconfig: {Name:mkd364151ba0e08b56cf2ae826abbcc274faaabf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
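	A hedged sketch of the cluster and context entries this repair writes into the kubeconfig (the user entry name is assumed to mirror the profile name; the server URL and CA path come from the client config logged just below):
	  clusters:
	  - name: functional-752103
	    cluster:
	      server: https://192.168.49.2:8441
	      certificate-authority: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt
	  contexts:
	  - name: functional-752103
	    context:
	      cluster: functional-752103
	      user: functional-752103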
	I1213 18:35:58.203546   38829 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22122-2686/kubeconfig
	I1213 18:35:58.203705   38829 kapi.go:59] client config for functional-752103: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/client.crt", KeyFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/client.key", CAFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextP
rotos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 18:35:58.204223   38829 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1213 18:35:58.204247   38829 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1213 18:35:58.204258   38829 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1213 18:35:58.204263   38829 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1213 18:35:58.204267   38829 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1213 18:35:58.204300   38829 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1213 18:35:58.204536   38829 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 18:35:58.212005   38829 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1213 18:35:58.212037   38829 kubeadm.go:602] duration metric: took 17.346627ms to restartPrimaryControlPlane
	I1213 18:35:58.212045   38829 kubeadm.go:403] duration metric: took 52.608163ms to StartCluster
	I1213 18:35:58.212060   38829 settings.go:142] acquiring lock: {Name:mkabef07beee93a0619ef6b8f854900ab9ed0899 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 18:35:58.212116   38829 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22122-2686/kubeconfig
	I1213 18:35:58.212712   38829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/kubeconfig: {Name:mkd364151ba0e08b56cf2ae826abbcc274faaabf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 18:35:58.212903   38829 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 18:35:58.213488   38829 config.go:182] Loaded profile config "functional-752103": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 18:35:58.213543   38829 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 18:35:58.213607   38829 addons.go:70] Setting storage-provisioner=true in profile "functional-752103"
	I1213 18:35:58.213620   38829 addons.go:239] Setting addon storage-provisioner=true in "functional-752103"
	I1213 18:35:58.213643   38829 host.go:66] Checking if "functional-752103" exists ...
	I1213 18:35:58.214229   38829 cli_runner.go:164] Run: docker container inspect functional-752103 --format={{.State.Status}}
	I1213 18:35:58.214390   38829 addons.go:70] Setting default-storageclass=true in profile "functional-752103"
	I1213 18:35:58.214412   38829 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-752103"
	I1213 18:35:58.214713   38829 cli_runner.go:164] Run: docker container inspect functional-752103 --format={{.State.Status}}
	I1213 18:35:58.219256   38829 out.go:179] * Verifying Kubernetes components...
	I1213 18:35:58.222143   38829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 18:35:58.244199   38829 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 18:35:58.247016   38829 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:35:58.247042   38829 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 18:35:58.247112   38829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:35:58.257520   38829 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22122-2686/kubeconfig
	I1213 18:35:58.257687   38829 kapi.go:59] client config for functional-752103: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/client.crt", KeyFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/client.key", CAFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 18:35:58.257971   38829 addons.go:239] Setting addon default-storageclass=true in "functional-752103"
	I1213 18:35:58.258004   38829 host.go:66] Checking if "functional-752103" exists ...
	I1213 18:35:58.258425   38829 cli_runner.go:164] Run: docker container inspect functional-752103 --format={{.State.Status}}
	I1213 18:35:58.277237   38829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa Username:docker}
	I1213 18:35:58.306835   38829 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 18:35:58.306855   38829 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 18:35:58.306918   38829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:35:58.340724   38829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa Username:docker}
	I1213 18:35:58.416694   38829 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 18:35:58.451165   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:35:58.493354   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:35:59.080268   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:35:59.080307   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:35:59.080337   38829 retry.go:31] will retry after 153.209012ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:35:59.080385   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:35:59.080398   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:35:59.080404   38829 retry.go:31] will retry after 291.62792ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:35:59.080464   38829 node_ready.go:35] waiting up to 6m0s for node "functional-752103" to be "Ready" ...
	I1213 18:35:59.080578   38829 type.go:168] "Request Body" body=""
	I1213 18:35:59.080656   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:35:59.080963   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:35:59.234362   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:35:59.300149   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:35:59.300200   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:35:59.300219   38829 retry.go:31] will retry after 511.331502ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:35:59.372301   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:35:59.426538   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:35:59.430102   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:35:59.430132   38829 retry.go:31] will retry after 426.700032ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:35:59.581486   38829 type.go:168] "Request Body" body=""
	I1213 18:35:59.581586   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:35:59.581963   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:35:59.812414   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:35:59.857973   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:35:59.893611   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:35:59.893688   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:35:59.893723   38829 retry.go:31] will retry after 310.068383ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:35:59.947559   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:35:59.947617   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:35:59.947640   38829 retry.go:31] will retry after 829.65637ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:00.080795   38829 type.go:168] "Request Body" body=""
	I1213 18:36:00.080875   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:00.081240   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:00.205923   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:36:00.416702   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:00.416818   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:00.416873   38829 retry.go:31] will retry after 579.133816ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:00.581369   38829 type.go:168] "Request Body" body=""
	I1213 18:36:00.581557   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:00.582010   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:00.778452   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:36:00.837536   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:00.837585   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:00.837604   38829 retry.go:31] will retry after 974.075863ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:00.996954   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:36:01.059672   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:01.059714   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:01.059763   38829 retry.go:31] will retry after 1.136000803s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:01.080856   38829 type.go:168] "Request Body" body=""
	I1213 18:36:01.080924   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:01.081261   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:01.081306   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
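	[editor's note] The GET requests against /api/v1/nodes/functional-752103 above are the node-readiness wait loop polling roughly every 500 ms until the node reports Ready or the 6-minute budget expires. A rough client-go sketch of such a poll; function and variable names are illustrative, not minikube's code:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls the node's Ready condition until it is True or the timeout elapses.
	func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500 ms poll interval seen in the log
		}
		return fmt.Errorf("node %q not Ready within %v", name, timeout)
	}

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(config)
		if err := waitNodeReady(context.Background(), cs, "functional-752103", 6*time.Minute); err != nil {
			fmt.Println(err)
		}
	}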
	I1213 18:36:01.580749   38829 type.go:168] "Request Body" body=""
	I1213 18:36:01.580822   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:01.581172   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:01.812632   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:36:01.883701   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:01.883803   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:01.883825   38829 retry.go:31] will retry after 921.808005ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:02.081109   38829 type.go:168] "Request Body" body=""
	I1213 18:36:02.081198   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:02.081477   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:02.196877   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:36:02.253907   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:02.257605   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:02.257637   38829 retry.go:31] will retry after 1.546462752s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:02.581141   38829 type.go:168] "Request Body" body=""
	I1213 18:36:02.581286   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:02.581677   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:02.805901   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:36:02.889297   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:02.893182   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:02.893216   38829 retry.go:31] will retry after 1.247577285s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:03.081687   38829 type.go:168] "Request Body" body=""
	I1213 18:36:03.081764   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:03.082108   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:03.082162   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:03.580643   38829 type.go:168] "Request Body" body=""
	I1213 18:36:03.580714   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:03.580995   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:03.804445   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:36:03.865304   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:03.865353   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:03.865372   38829 retry.go:31] will retry after 3.450909707s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:04.080758   38829 type.go:168] "Request Body" body=""
	I1213 18:36:04.080837   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:04.081202   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:04.141517   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:36:04.204625   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:04.204670   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:04.204689   38829 retry.go:31] will retry after 3.409599879s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:04.581166   38829 type.go:168] "Request Body" body=""
	I1213 18:36:04.581250   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:04.581566   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:05.081373   38829 type.go:168] "Request Body" body=""
	I1213 18:36:05.081443   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:05.081739   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:05.581581   38829 type.go:168] "Request Body" body=""
	I1213 18:36:05.581657   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:05.581992   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:05.582049   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:06.080707   38829 type.go:168] "Request Body" body=""
	I1213 18:36:06.080786   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:06.081099   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:06.580765   38829 type.go:168] "Request Body" body=""
	I1213 18:36:06.580849   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:06.581220   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:07.080726   38829 type.go:168] "Request Body" body=""
	I1213 18:36:07.080806   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:07.081195   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:07.316533   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:36:07.393411   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:07.397246   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:07.397278   38829 retry.go:31] will retry after 2.442447522s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:07.581582   38829 type.go:168] "Request Body" body=""
	I1213 18:36:07.581660   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:07.582007   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:07.615412   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:36:07.670357   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:07.674453   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:07.674491   38829 retry.go:31] will retry after 4.254133001s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:08.080696   38829 type.go:168] "Request Body" body=""
	I1213 18:36:08.080805   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:08.081173   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:08.081221   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:08.581149   38829 type.go:168] "Request Body" body=""
	I1213 18:36:08.581249   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:08.581593   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:09.081583   38829 type.go:168] "Request Body" body=""
	I1213 18:36:09.081656   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:09.081980   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:09.581654   38829 type.go:168] "Request Body" body=""
	I1213 18:36:09.581729   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:09.582054   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:09.840484   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:36:09.900307   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:09.900343   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:09.900361   38829 retry.go:31] will retry after 4.640117862s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:10.081715   38829 type.go:168] "Request Body" body=""
	I1213 18:36:10.081794   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:10.082116   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:10.082183   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:10.580872   38829 type.go:168] "Request Body" body=""
	I1213 18:36:10.580959   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:10.581373   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:11.080692   38829 type.go:168] "Request Body" body=""
	I1213 18:36:11.080776   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:11.081115   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:11.580824   38829 type.go:168] "Request Body" body=""
	I1213 18:36:11.580896   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:11.581249   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:11.928812   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:36:11.987432   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:11.987481   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:11.987500   38829 retry.go:31] will retry after 8.251825899s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:12.081733   38829 type.go:168] "Request Body" body=""
	I1213 18:36:12.081819   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:12.082391   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:12.082470   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:12.580663   38829 type.go:168] "Request Body" body=""
	I1213 18:36:12.580742   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:12.581100   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:13.080737   38829 type.go:168] "Request Body" body=""
	I1213 18:36:13.080809   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:13.081119   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:13.580828   38829 type.go:168] "Request Body" body=""
	I1213 18:36:13.580900   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:13.581257   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:14.080983   38829 type.go:168] "Request Body" body=""
	I1213 18:36:14.081075   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:14.081364   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:14.540746   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:36:14.581321   38829 type.go:168] "Request Body" body=""
	I1213 18:36:14.581395   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:14.581672   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:14.581722   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:14.600534   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:14.600587   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:14.600605   38829 retry.go:31] will retry after 8.957681085s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:15.080748   38829 type.go:168] "Request Body" body=""
	I1213 18:36:15.080845   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:15.081200   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:15.580789   38829 type.go:168] "Request Body" body=""
	I1213 18:36:15.580868   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:15.581235   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:16.080743   38829 type.go:168] "Request Body" body=""
	I1213 18:36:16.080819   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:16.081170   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:16.580886   38829 type.go:168] "Request Body" body=""
	I1213 18:36:16.580958   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:16.581330   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:17.081614   38829 type.go:168] "Request Body" body=""
	I1213 18:36:17.081684   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:17.081955   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:17.081995   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:17.580662   38829 type.go:168] "Request Body" body=""
	I1213 18:36:17.580732   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:17.581063   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:18.080650   38829 type.go:168] "Request Body" body=""
	I1213 18:36:18.080721   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:18.081108   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:18.580672   38829 type.go:168] "Request Body" body=""
	I1213 18:36:18.580742   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:18.581079   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:19.081047   38829 type.go:168] "Request Body" body=""
	I1213 18:36:19.081115   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:19.081424   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:19.580732   38829 type.go:168] "Request Body" body=""
	I1213 18:36:19.580810   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:19.581191   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:19.581284   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:20.080706   38829 type.go:168] "Request Body" body=""
	I1213 18:36:20.080786   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:20.081124   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:20.239601   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:36:20.301361   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:20.301401   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:20.301420   38829 retry.go:31] will retry after 6.59814029s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:20.580747   38829 type.go:168] "Request Body" body=""
	I1213 18:36:20.580821   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:20.581125   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:21.080844   38829 type.go:168] "Request Body" body=""
	I1213 18:36:21.080933   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:21.081353   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:21.580686   38829 type.go:168] "Request Body" body=""
	I1213 18:36:21.580762   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:21.581080   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:22.080810   38829 type.go:168] "Request Body" body=""
	I1213 18:36:22.080884   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:22.081217   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:22.081274   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:22.580705   38829 type.go:168] "Request Body" body=""
	I1213 18:36:22.580799   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:22.581136   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:23.080675   38829 type.go:168] "Request Body" body=""
	I1213 18:36:23.080747   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:23.081137   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:23.558605   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:36:23.581258   38829 type.go:168] "Request Body" body=""
	I1213 18:36:23.581331   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:23.581605   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:23.617607   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:23.617653   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:23.617671   38829 retry.go:31] will retry after 14.669686806s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:24.081419   38829 type.go:168] "Request Body" body=""
	I1213 18:36:24.081508   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:24.081878   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:24.081930   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:24.580661   38829 type.go:168] "Request Body" body=""
	I1213 18:36:24.580735   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:24.581024   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:25.080794   38829 type.go:168] "Request Body" body=""
	I1213 18:36:25.080880   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:25.081347   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:25.580742   38829 type.go:168] "Request Body" body=""
	I1213 18:36:25.580816   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:25.581207   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:26.080781   38829 type.go:168] "Request Body" body=""
	I1213 18:36:26.080854   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:26.081166   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:26.580764   38829 type.go:168] "Request Body" body=""
	I1213 18:36:26.580862   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:26.581247   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:26.581300   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:26.900727   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:36:26.960607   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:26.960668   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:26.960687   38829 retry.go:31] will retry after 15.397640826s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:27.080883   38829 type.go:168] "Request Body" body=""
	I1213 18:36:27.080957   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:27.081297   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:27.580637   38829 type.go:168] "Request Body" body=""
	I1213 18:36:27.580703   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:27.580956   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:28.080641   38829 type.go:168] "Request Body" body=""
	I1213 18:36:28.080752   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:28.081081   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:28.580963   38829 type.go:168] "Request Body" body=""
	I1213 18:36:28.581049   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:28.581366   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:28.581418   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:29.081265   38829 type.go:168] "Request Body" body=""
	I1213 18:36:29.081330   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:29.081585   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:29.581341   38829 type.go:168] "Request Body" body=""
	I1213 18:36:29.581414   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:29.581724   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:30.083283   38829 type.go:168] "Request Body" body=""
	I1213 18:36:30.083370   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:30.083708   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:30.581559   38829 type.go:168] "Request Body" body=""
	I1213 18:36:30.581633   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:30.581902   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:30.581946   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:31.081665   38829 type.go:168] "Request Body" body=""
	I1213 18:36:31.081736   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:31.082102   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:31.580734   38829 type.go:168] "Request Body" body=""
	I1213 18:36:31.580815   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:31.581165   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:32.080588   38829 type.go:168] "Request Body" body=""
	I1213 18:36:32.080654   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:32.080909   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:32.581657   38829 type.go:168] "Request Body" body=""
	I1213 18:36:32.581734   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:32.582056   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:32.582116   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:33.080787   38829 type.go:168] "Request Body" body=""
	I1213 18:36:33.080867   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:33.081206   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:33.580678   38829 type.go:168] "Request Body" body=""
	I1213 18:36:33.580745   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:33.580998   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:34.080961   38829 type.go:168] "Request Body" body=""
	I1213 18:36:34.081065   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:34.081433   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:34.581228   38829 type.go:168] "Request Body" body=""
	I1213 18:36:34.581300   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:34.581636   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:35.081408   38829 type.go:168] "Request Body" body=""
	I1213 18:36:35.081478   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:35.081747   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:35.081790   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:35.581492   38829 type.go:168] "Request Body" body=""
	I1213 18:36:35.581568   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:35.581859   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:36.081553   38829 type.go:168] "Request Body" body=""
	I1213 18:36:36.081623   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:36.081928   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:36.581632   38829 type.go:168] "Request Body" body=""
	I1213 18:36:36.581711   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:36.582018   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:37.080726   38829 type.go:168] "Request Body" body=""
	I1213 18:36:37.080804   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:37.081189   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:37.580917   38829 type.go:168] "Request Body" body=""
	I1213 18:36:37.580993   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:37.581352   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:37.581446   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:38.080688   38829 type.go:168] "Request Body" body=""
	I1213 18:36:38.080770   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:38.081101   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:38.287495   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:36:38.357240   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:38.360822   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:38.360853   38829 retry.go:31] will retry after 30.28485436s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:38.581302   38829 type.go:168] "Request Body" body=""
	I1213 18:36:38.581374   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:38.581695   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:39.081218   38829 type.go:168] "Request Body" body=""
	I1213 18:36:39.081295   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:39.081664   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:39.581465   38829 type.go:168] "Request Body" body=""
	I1213 18:36:39.581533   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:39.581794   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:39.581852   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:40.081640   38829 type.go:168] "Request Body" body=""
	I1213 18:36:40.081724   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:40.082071   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:40.580714   38829 type.go:168] "Request Body" body=""
	I1213 18:36:40.580788   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:40.581147   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:41.080724   38829 type.go:168] "Request Body" body=""
	I1213 18:36:41.080801   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:41.081086   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:41.580719   38829 type.go:168] "Request Body" body=""
	I1213 18:36:41.580809   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:41.581140   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:42.080831   38829 type.go:168] "Request Body" body=""
	I1213 18:36:42.080909   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:42.081302   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:42.081363   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:42.358603   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:36:42.430743   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:42.430803   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:42.430822   38829 retry.go:31] will retry after 12.093455046s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:42.581106   38829 type.go:168] "Request Body" body=""
	I1213 18:36:42.581178   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:42.581444   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:43.081272   38829 type.go:168] "Request Body" body=""
	I1213 18:36:43.081354   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:43.081648   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:43.580658   38829 type.go:168] "Request Body" body=""
	I1213 18:36:43.580735   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:43.581055   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:44.080685   38829 type.go:168] "Request Body" body=""
	I1213 18:36:44.080795   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:44.081152   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:44.580685   38829 type.go:168] "Request Body" body=""
	I1213 18:36:44.580759   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:44.581102   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:44.581161   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:45.080810   38829 type.go:168] "Request Body" body=""
	I1213 18:36:45.080894   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:45.081226   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:45.581071   38829 type.go:168] "Request Body" body=""
	I1213 18:36:45.581137   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:45.581415   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:46.081136   38829 type.go:168] "Request Body" body=""
	I1213 18:36:46.081217   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:46.081567   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:46.581397   38829 type.go:168] "Request Body" body=""
	I1213 18:36:46.581468   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:46.581797   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:46.581852   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:47.081586   38829 type.go:168] "Request Body" body=""
	I1213 18:36:47.081660   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:47.081917   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:47.580671   38829 type.go:168] "Request Body" body=""
	I1213 18:36:47.580752   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:47.581109   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:48.080824   38829 type.go:168] "Request Body" body=""
	I1213 18:36:48.080903   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:48.081209   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:48.581175   38829 type.go:168] "Request Body" body=""
	I1213 18:36:48.581241   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:48.581504   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:49.081596   38829 type.go:168] "Request Body" body=""
	I1213 18:36:49.081669   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:49.082029   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:49.082084   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:49.580622   38829 type.go:168] "Request Body" body=""
	I1213 18:36:49.580704   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:49.581055   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:50.080743   38829 type.go:168] "Request Body" body=""
	I1213 18:36:50.080823   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:50.081147   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:50.580734   38829 type.go:168] "Request Body" body=""
	I1213 18:36:50.580806   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:50.581174   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:51.080882   38829 type.go:168] "Request Body" body=""
	I1213 18:36:51.080963   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:51.081341   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:51.580687   38829 type.go:168] "Request Body" body=""
	I1213 18:36:51.580761   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:51.581057   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:51.581110   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:52.080731   38829 type.go:168] "Request Body" body=""
	I1213 18:36:52.080817   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:52.081192   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:52.580893   38829 type.go:168] "Request Body" body=""
	I1213 18:36:52.580986   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:52.581347   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:53.080709   38829 type.go:168] "Request Body" body=""
	I1213 18:36:53.080779   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:53.081063   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:53.580755   38829 type.go:168] "Request Body" body=""
	I1213 18:36:53.580829   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:53.581182   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:53.581240   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:54.081104   38829 type.go:168] "Request Body" body=""
	I1213 18:36:54.081173   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:54.081470   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:54.525326   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:36:54.580832   38829 type.go:168] "Request Body" body=""
	I1213 18:36:54.580898   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:54.581173   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:54.600652   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:54.600694   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:54.600713   38829 retry.go:31] will retry after 41.212755678s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:55.081498   38829 type.go:168] "Request Body" body=""
	I1213 18:36:55.081571   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:55.081915   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:55.580632   38829 type.go:168] "Request Body" body=""
	I1213 18:36:55.580703   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:55.581066   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:56.080716   38829 type.go:168] "Request Body" body=""
	I1213 18:36:56.080780   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:56.081078   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:56.081124   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:56.580765   38829 type.go:168] "Request Body" body=""
	I1213 18:36:56.580847   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:56.581215   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:57.080817   38829 type.go:168] "Request Body" body=""
	I1213 18:36:57.080904   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:57.081246   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:57.580702   38829 type.go:168] "Request Body" body=""
	I1213 18:36:57.580781   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:57.581095   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:58.080724   38829 type.go:168] "Request Body" body=""
	I1213 18:36:58.080815   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:58.081171   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:58.081230   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:58.580804   38829 type.go:168] "Request Body" body=""
	I1213 18:36:58.580886   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:58.581230   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:59.080817   38829 type.go:168] "Request Body" body=""
	I1213 18:36:59.080891   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:59.081167   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:59.580749   38829 type.go:168] "Request Body" body=""
	I1213 18:36:59.580848   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:59.581262   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:00.080983   38829 type.go:168] "Request Body" body=""
	I1213 18:37:00.081091   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:00.081411   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:00.081460   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:00.580690   38829 type.go:168] "Request Body" body=""
	I1213 18:37:00.580766   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:00.581072   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:01.080673   38829 type.go:168] "Request Body" body=""
	I1213 18:37:01.080760   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:01.081112   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:01.580720   38829 type.go:168] "Request Body" body=""
	I1213 18:37:01.580794   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:01.581158   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:02.080753   38829 type.go:168] "Request Body" body=""
	I1213 18:37:02.080821   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:02.081110   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:02.580732   38829 type.go:168] "Request Body" body=""
	I1213 18:37:02.580806   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:02.581155   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:02.581205   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:03.080748   38829 type.go:168] "Request Body" body=""
	I1213 18:37:03.080823   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:03.081153   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:03.580615   38829 type.go:168] "Request Body" body=""
	I1213 18:37:03.580691   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:03.580974   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:04.080845   38829 type.go:168] "Request Body" body=""
	I1213 18:37:04.080916   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:04.081330   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:04.580902   38829 type.go:168] "Request Body" body=""
	I1213 18:37:04.581002   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:04.581380   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:04.581437   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:05.080788   38829 type.go:168] "Request Body" body=""
	I1213 18:37:05.080867   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:05.081182   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:05.580731   38829 type.go:168] "Request Body" body=""
	I1213 18:37:05.580826   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:05.581178   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:06.080721   38829 type.go:168] "Request Body" body=""
	I1213 18:37:06.080796   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:06.081180   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:06.580658   38829 type.go:168] "Request Body" body=""
	I1213 18:37:06.580727   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:06.581063   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:07.080796   38829 type.go:168] "Request Body" body=""
	I1213 18:37:07.080883   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:07.081219   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:07.081280   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:07.580756   38829 type.go:168] "Request Body" body=""
	I1213 18:37:07.580835   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:07.581166   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:08.080678   38829 type.go:168] "Request Body" body=""
	I1213 18:37:08.080757   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:08.081073   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:08.580840   38829 type.go:168] "Request Body" body=""
	I1213 18:37:08.580922   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:08.581286   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:08.646539   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:37:08.707161   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:37:08.707197   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:37:08.707216   38829 retry.go:31] will retry after 43.904706278s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
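
Note: the block above shows an addon manifest apply failing against the unreachable apiserver and a retry being scheduled roughly 44 seconds later. A hedged sketch of that retry-with-backoff pattern follows, reusing the kubectl invocation from the log; the helper itself is illustrative and is not minikube's retry.go:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyWithRetry mimics the "apply failed, will retry" pattern from the log:
// run kubectl apply, and on failure wait before trying again.
func applyWithRetry(manifest string, attempts int, backoff time.Duration) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
			"apply", "--force", "-f", manifest)
		out, err := cmd.CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("apply failed: %v: %s", err, out)
		fmt.Printf("will retry after %s: %v\n", backoff, lastErr)
		time.Sleep(backoff)
		backoff *= 2 // crude exponential backoff; minikube computes its own schedule
	}
	return lastErr
}

func main() {
	err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 3, 44*time.Second)
	if err != nil {
		fmt.Println("giving up:", err)
	}
}
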
	I1213 18:37:09.080730   38829 type.go:168] "Request Body" body=""
	I1213 18:37:09.080812   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:09.081148   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:09.580688   38829 type.go:168] "Request Body" body=""
	I1213 18:37:09.580756   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:09.581080   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:09.581129   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:10.080738   38829 type.go:168] "Request Body" body=""
	I1213 18:37:10.080818   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:10.081184   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:10.580752   38829 type.go:168] "Request Body" body=""
	I1213 18:37:10.580829   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:10.581212   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:11.080819   38829 type.go:168] "Request Body" body=""
	I1213 18:37:11.080905   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:11.081275   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:11.580750   38829 type.go:168] "Request Body" body=""
	I1213 18:37:11.580826   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:11.581167   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:11.581218   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:12.080976   38829 type.go:168] "Request Body" body=""
	I1213 18:37:12.081075   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:12.081413   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:12.581163   38829 type.go:168] "Request Body" body=""
	I1213 18:37:12.581239   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:12.581504   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:13.081350   38829 type.go:168] "Request Body" body=""
	I1213 18:37:13.081422   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:13.081759   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:13.581540   38829 type.go:168] "Request Body" body=""
	I1213 18:37:13.581621   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:13.581958   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:13.582012   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:14.080637   38829 type.go:168] "Request Body" body=""
	I1213 18:37:14.080749   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:14.081037   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:14.580751   38829 type.go:168] "Request Body" body=""
	I1213 18:37:14.580822   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:14.581126   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:15.080809   38829 type.go:168] "Request Body" body=""
	I1213 18:37:15.080894   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:15.081289   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:15.580701   38829 type.go:168] "Request Body" body=""
	I1213 18:37:15.580784   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:15.581161   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:16.080844   38829 type.go:168] "Request Body" body=""
	I1213 18:37:16.080922   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:16.081237   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:16.081285   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:16.580898   38829 type.go:168] "Request Body" body=""
	I1213 18:37:16.581034   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:16.581399   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:17.080661   38829 type.go:168] "Request Body" body=""
	I1213 18:37:17.080737   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:17.080990   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:17.580692   38829 type.go:168] "Request Body" body=""
	I1213 18:37:17.580803   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:17.581102   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:18.080750   38829 type.go:168] "Request Body" body=""
	I1213 18:37:18.080868   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:18.081221   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:18.581194   38829 type.go:168] "Request Body" body=""
	I1213 18:37:18.581282   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:18.581589   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:18.581661   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:19.080720   38829 type.go:168] "Request Body" body=""
	I1213 18:37:19.080794   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:19.081153   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:19.580707   38829 type.go:168] "Request Body" body=""
	I1213 18:37:19.580807   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:19.581139   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:20.080683   38829 type.go:168] "Request Body" body=""
	I1213 18:37:20.080783   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:20.081124   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:20.580699   38829 type.go:168] "Request Body" body=""
	I1213 18:37:20.580768   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:20.581140   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:21.080704   38829 type.go:168] "Request Body" body=""
	I1213 18:37:21.080813   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:21.081147   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:21.081200   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:21.580715   38829 type.go:168] "Request Body" body=""
	I1213 18:37:21.580794   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:21.581158   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:22.080770   38829 type.go:168] "Request Body" body=""
	I1213 18:37:22.080878   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:22.081249   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:22.580823   38829 type.go:168] "Request Body" body=""
	I1213 18:37:22.580919   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:22.581227   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:23.080672   38829 type.go:168] "Request Body" body=""
	I1213 18:37:23.080740   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:23.081069   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:23.580725   38829 type.go:168] "Request Body" body=""
	I1213 18:37:23.580816   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:23.581144   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:23.581194   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:24.081109   38829 type.go:168] "Request Body" body=""
	I1213 18:37:24.081180   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:24.081522   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:24.581618   38829 type.go:168] "Request Body" body=""
	I1213 18:37:24.581687   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:24.582010   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:25.080756   38829 type.go:168] "Request Body" body=""
	I1213 18:37:25.080839   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:25.081197   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:25.580943   38829 type.go:168] "Request Body" body=""
	I1213 18:37:25.581038   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:25.581354   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:25.581416   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:26.080723   38829 type.go:168] "Request Body" body=""
	I1213 18:37:26.080835   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:26.081227   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:26.580735   38829 type.go:168] "Request Body" body=""
	I1213 18:37:26.580817   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:26.581160   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:27.080700   38829 type.go:168] "Request Body" body=""
	I1213 18:37:27.080784   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:27.081126   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:27.580667   38829 type.go:168] "Request Body" body=""
	I1213 18:37:27.580751   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:27.581089   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:28.080604   38829 type.go:168] "Request Body" body=""
	I1213 18:37:28.080698   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:28.081045   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:28.081097   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:28.580817   38829 type.go:168] "Request Body" body=""
	I1213 18:37:28.580906   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:28.581222   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:29.080796   38829 type.go:168] "Request Body" body=""
	I1213 18:37:29.080873   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:29.081151   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:29.580777   38829 type.go:168] "Request Body" body=""
	I1213 18:37:29.580870   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:29.581199   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:30.080803   38829 type.go:168] "Request Body" body=""
	I1213 18:37:30.080884   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:30.081237   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:30.081287   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:30.580672   38829 type.go:168] "Request Body" body=""
	I1213 18:37:30.580745   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:30.581077   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:31.081506   38829 type.go:168] "Request Body" body=""
	I1213 18:37:31.081581   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:31.081922   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:31.580645   38829 type.go:168] "Request Body" body=""
	I1213 18:37:31.580718   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:31.581102   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:32.080661   38829 type.go:168] "Request Body" body=""
	I1213 18:37:32.080783   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:32.081114   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:32.580825   38829 type.go:168] "Request Body" body=""
	I1213 18:37:32.580936   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:32.581248   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:32.581295   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:33.080746   38829 type.go:168] "Request Body" body=""
	I1213 18:37:33.080835   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:33.081225   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:33.580676   38829 type.go:168] "Request Body" body=""
	I1213 18:37:33.580750   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:33.581029   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:34.081646   38829 type.go:168] "Request Body" body=""
	I1213 18:37:34.081715   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:34.082009   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:34.580682   38829 type.go:168] "Request Body" body=""
	I1213 18:37:34.580780   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:34.581134   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:35.080825   38829 type.go:168] "Request Body" body=""
	I1213 18:37:35.080895   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:35.081246   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:35.081298   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:35.580940   38829 type.go:168] "Request Body" body=""
	I1213 18:37:35.581051   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:35.581350   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:35.813701   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:37:35.887144   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:37:35.887179   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:37:35.887279   38829 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
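
Note: kubectl's suggestion to pass --validate=false would not help here; validation is simply the first step that needs the apiserver's OpenAPI endpoint, and the real failure is the refused connection on localhost:8441. A tiny illustrative pre-flight probe (address taken from the log, otherwise an assumption) makes that distinction visible:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The failing endpoint from the log; if this dial is refused, any
	// kubectl apply against it will fail regardless of --validate.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable, skipping kubectl apply:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver reachable, safe to apply addon manifests")
}
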
	I1213 18:37:36.080750   38829 type.go:168] "Request Body" body=""
	I1213 18:37:36.080833   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:36.081177   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:36.580678   38829 type.go:168] "Request Body" body=""
	I1213 18:37:36.580752   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:36.581058   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:37.080714   38829 type.go:168] "Request Body" body=""
	I1213 18:37:37.080814   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:37.081161   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:37.580851   38829 type.go:168] "Request Body" body=""
	I1213 18:37:37.580926   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:37.581239   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:37.581288   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:38.080774   38829 type.go:168] "Request Body" body=""
	I1213 18:37:38.080865   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:38.081305   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:38.581237   38829 type.go:168] "Request Body" body=""
	I1213 18:37:38.581321   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:38.581645   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:39.081533   38829 type.go:168] "Request Body" body=""
	I1213 18:37:39.081612   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:39.081897   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:39.581503   38829 type.go:168] "Request Body" body=""
	I1213 18:37:39.581567   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:39.581828   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:39.581866   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:40.081636   38829 type.go:168] "Request Body" body=""
	I1213 18:37:40.081710   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:40.082035   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:40.580686   38829 type.go:168] "Request Body" body=""
	I1213 18:37:40.580764   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:40.581082   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:41.080659   38829 type.go:168] "Request Body" body=""
	I1213 18:37:41.080744   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:41.081073   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:41.580856   38829 type.go:168] "Request Body" body=""
	I1213 18:37:41.580929   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:41.581268   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:42.080912   38829 type.go:168] "Request Body" body=""
	I1213 18:37:42.081054   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:42.081405   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:42.081473   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:42.581188   38829 type.go:168] "Request Body" body=""
	I1213 18:37:42.581268   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:42.581539   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:43.081397   38829 type.go:168] "Request Body" body=""
	I1213 18:37:43.081474   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:43.081823   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:43.581624   38829 type.go:168] "Request Body" body=""
	I1213 18:37:43.581704   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:43.582019   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:44.081168   38829 type.go:168] "Request Body" body=""
	I1213 18:37:44.081243   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:44.081539   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:44.081581   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:44.581405   38829 type.go:168] "Request Body" body=""
	I1213 18:37:44.581481   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:44.581805   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:45.081836   38829 type.go:168] "Request Body" body=""
	I1213 18:37:45.081938   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:45.082358   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:45.580699   38829 type.go:168] "Request Body" body=""
	I1213 18:37:45.580773   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:45.581090   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:46.080825   38829 type.go:168] "Request Body" body=""
	I1213 18:37:46.080898   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:46.081231   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:46.580728   38829 type.go:168] "Request Body" body=""
	I1213 18:37:46.580818   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:46.581180   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:46.581235   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:47.080684   38829 type.go:168] "Request Body" body=""
	I1213 18:37:47.080759   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:47.081124   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:47.580848   38829 type.go:168] "Request Body" body=""
	I1213 18:37:47.580921   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:47.581277   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:48.080712   38829 type.go:168] "Request Body" body=""
	I1213 18:37:48.080804   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:48.081135   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:48.580811   38829 type.go:168] "Request Body" body=""
	I1213 18:37:48.580882   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:48.581154   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:49.081058   38829 type.go:168] "Request Body" body=""
	I1213 18:37:49.081150   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:49.081477   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:49.081542   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:49.581293   38829 type.go:168] "Request Body" body=""
	I1213 18:37:49.581370   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:49.581713   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:50.081496   38829 type.go:168] "Request Body" body=""
	I1213 18:37:50.081562   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:50.081847   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:50.581629   38829 type.go:168] "Request Body" body=""
	I1213 18:37:50.581706   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:50.582071   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:51.080700   38829 type.go:168] "Request Body" body=""
	I1213 18:37:51.080790   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:51.081171   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:51.580683   38829 type.go:168] "Request Body" body=""
	I1213 18:37:51.580754   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:51.581047   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:51.581094   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:52.080714   38829 type.go:168] "Request Body" body=""
	I1213 18:37:52.080787   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:52.081175   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:52.580775   38829 type.go:168] "Request Body" body=""
	I1213 18:37:52.580867   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:52.581254   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:52.612466   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:37:52.672905   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:37:52.677070   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:37:52.677165   38829 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 18:37:52.680309   38829 out.go:179] * Enabled addons: 
	I1213 18:37:52.684021   38829 addons.go:530] duration metric: took 1m54.470472162s for enable addons: enabled=[]
	I1213 18:37:53.081534   38829 type.go:168] "Request Body" body=""
	I1213 18:37:53.081600   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:53.081904   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:53.580635   38829 type.go:168] "Request Body" body=""
	I1213 18:37:53.580711   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:53.581033   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:54.080643   38829 type.go:168] "Request Body" body=""
	I1213 18:37:54.080739   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:54.082029   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	W1213 18:37:54.082091   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:54.581623   38829 type.go:168] "Request Body" body=""
	I1213 18:37:54.581698   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:54.581957   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:55.080687   38829 type.go:168] "Request Body" body=""
	I1213 18:37:55.080780   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:55.081111   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:55.580756   38829 type.go:168] "Request Body" body=""
	I1213 18:37:55.580828   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:55.581197   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:56.080640   38829 type.go:168] "Request Body" body=""
	I1213 18:37:56.080714   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:56.081049   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:56.580613   38829 type.go:168] "Request Body" body=""
	I1213 18:37:56.580689   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:56.581045   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:56.581101   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:57.080597   38829 type.go:168] "Request Body" body=""
	I1213 18:37:57.080691   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:57.081049   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:57.580930   38829 type.go:168] "Request Body" body=""
	I1213 18:37:57.581038   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:57.585714   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1213 18:37:58.081512   38829 type.go:168] "Request Body" body=""
	I1213 18:37:58.081591   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:58.081945   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:58.580703   38829 type.go:168] "Request Body" body=""
	I1213 18:37:58.580778   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:58.581145   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:58.581214   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:59.081515   38829 type.go:168] "Request Body" body=""
	I1213 18:37:59.081606   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:59.081931   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:59.580661   38829 type.go:168] "Request Body" body=""
	I1213 18:37:59.580732   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:59.581072   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:00.080803   38829 type.go:168] "Request Body" body=""
	I1213 18:38:00.080888   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:00.081237   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:00.581619   38829 type.go:168] "Request Body" body=""
	I1213 18:38:00.581690   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:00.582027   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:00.582084   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:01.080751   38829 type.go:168] "Request Body" body=""
	I1213 18:38:01.080838   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:01.081194   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:01.580724   38829 type.go:168] "Request Body" body=""
	I1213 18:38:01.580804   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:01.581152   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:02.080668   38829 type.go:168] "Request Body" body=""
	I1213 18:38:02.080746   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:02.081102   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:02.580776   38829 type.go:168] "Request Body" body=""
	I1213 18:38:02.580850   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:02.581187   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:03.080936   38829 type.go:168] "Request Body" body=""
	I1213 18:38:03.081031   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:03.081349   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:03.081405   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:03.580669   38829 type.go:168] "Request Body" body=""
	I1213 18:38:03.580767   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:03.581056   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:04.080818   38829 type.go:168] "Request Body" body=""
	I1213 18:38:04.080899   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:04.081235   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:04.580930   38829 type.go:168] "Request Body" body=""
	I1213 18:38:04.581025   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:04.581369   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:05.080659   38829 type.go:168] "Request Body" body=""
	I1213 18:38:05.080743   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:05.081076   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:05.580757   38829 type.go:168] "Request Body" body=""
	I1213 18:38:05.580829   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:05.581176   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:05.581227   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:06.080773   38829 type.go:168] "Request Body" body=""
	I1213 18:38:06.080851   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:06.081201   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:06.580678   38829 type.go:168] "Request Body" body=""
	I1213 18:38:06.580751   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:06.581040   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:07.080776   38829 type.go:168] "Request Body" body=""
	I1213 18:38:07.080848   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:07.081170   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:07.580731   38829 type.go:168] "Request Body" body=""
	I1213 18:38:07.580806   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:07.581160   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:08.080772   38829 type.go:168] "Request Body" body=""
	I1213 18:38:08.080849   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:08.081161   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:08.081226   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:08.580947   38829 type.go:168] "Request Body" body=""
	I1213 18:38:08.581044   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:08.581405   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:09.081557   38829 type.go:168] "Request Body" body=""
	I1213 18:38:09.081630   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:09.081955   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:09.580701   38829 type.go:168] "Request Body" body=""
	I1213 18:38:09.580777   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:09.581100   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:10.080747   38829 type.go:168] "Request Body" body=""
	I1213 18:38:10.080835   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:10.081225   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:10.081288   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:10.580771   38829 type.go:168] "Request Body" body=""
	I1213 18:38:10.580886   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:10.581218   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:11.080922   38829 type.go:168] "Request Body" body=""
	I1213 18:38:11.080992   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:11.081274   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:11.581973   38829 type.go:168] "Request Body" body=""
	I1213 18:38:11.582052   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:11.582377   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:12.081104   38829 type.go:168] "Request Body" body=""
	I1213 18:38:12.081179   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:12.081532   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:12.081585   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:12.581355   38829 type.go:168] "Request Body" body=""
	I1213 18:38:12.581430   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:12.581762   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:13.081529   38829 type.go:168] "Request Body" body=""
	I1213 18:38:13.081604   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:13.081921   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:13.580639   38829 type.go:168] "Request Body" body=""
	I1213 18:38:13.580716   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:13.581089   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:14.081616   38829 type.go:168] "Request Body" body=""
	I1213 18:38:14.081703   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:14.082037   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:14.082090   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:14.580727   38829 type.go:168] "Request Body" body=""
	I1213 18:38:14.580815   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:14.581180   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:15.080903   38829 type.go:168] "Request Body" body=""
	I1213 18:38:15.080982   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:15.081338   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:15.581041   38829 type.go:168] "Request Body" body=""
	I1213 18:38:15.581119   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:15.581474   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:16.081265   38829 type.go:168] "Request Body" body=""
	I1213 18:38:16.081338   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:16.081665   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:16.581493   38829 type.go:168] "Request Body" body=""
	I1213 18:38:16.581589   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:16.581945   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:16.581999   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:17.080642   38829 type.go:168] "Request Body" body=""
	I1213 18:38:17.080713   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:17.080986   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:17.580719   38829 type.go:168] "Request Body" body=""
	I1213 18:38:17.580796   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:17.581138   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:18.080868   38829 type.go:168] "Request Body" body=""
	I1213 18:38:18.080948   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:18.081331   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:18.581194   38829 type.go:168] "Request Body" body=""
	I1213 18:38:18.581268   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:18.581529   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:19.081522   38829 type.go:168] "Request Body" body=""
	I1213 18:38:19.081598   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:19.081945   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:19.082001   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:19.580714   38829 type.go:168] "Request Body" body=""
	I1213 18:38:19.580805   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:19.581171   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:20.080873   38829 type.go:168] "Request Body" body=""
	I1213 18:38:20.080948   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:20.081259   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:20.580728   38829 type.go:168] "Request Body" body=""
	I1213 18:38:20.580811   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:20.581178   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:21.080749   38829 type.go:168] "Request Body" body=""
	I1213 18:38:21.080849   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:21.081219   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:21.580655   38829 type.go:168] "Request Body" body=""
	I1213 18:38:21.580730   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:21.581101   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:21.581180   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:22.080740   38829 type.go:168] "Request Body" body=""
	I1213 18:38:22.080819   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:22.081200   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:22.580922   38829 type.go:168] "Request Body" body=""
	I1213 18:38:22.581020   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:22.581389   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:23.080725   38829 type.go:168] "Request Body" body=""
	I1213 18:38:23.080802   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:23.081145   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:23.580880   38829 type.go:168] "Request Body" body=""
	I1213 18:38:23.580958   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:23.581338   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:23.581392   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:24.081664   38829 type.go:168] "Request Body" body=""
	I1213 18:38:24.081759   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:24.082117   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:24.580825   38829 type.go:168] "Request Body" body=""
	I1213 18:38:24.580901   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:24.581233   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:25.080731   38829 type.go:168] "Request Body" body=""
	I1213 18:38:25.080813   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:25.081203   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:25.580730   38829 type.go:168] "Request Body" body=""
	I1213 18:38:25.580807   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:25.581142   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:26.080689   38829 type.go:168] "Request Body" body=""
	I1213 18:38:26.080779   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:26.081103   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:26.081156   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:26.580750   38829 type.go:168] "Request Body" body=""
	I1213 18:38:26.580831   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:26.581177   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:27.080736   38829 type.go:168] "Request Body" body=""
	I1213 18:38:27.080812   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:27.081191   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:27.580696   38829 type.go:168] "Request Body" body=""
	I1213 18:38:27.580770   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:27.581094   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:28.080768   38829 type.go:168] "Request Body" body=""
	I1213 18:38:28.080841   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:28.081147   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:28.081197   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:28.581180   38829 type.go:168] "Request Body" body=""
	I1213 18:38:28.581274   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:28.581646   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:29.080821   38829 type.go:168] "Request Body" body=""
	I1213 18:38:29.080892   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:29.081191   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:29.580951   38829 type.go:168] "Request Body" body=""
	I1213 18:38:29.581053   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:29.581390   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:30.080799   38829 type.go:168] "Request Body" body=""
	I1213 18:38:30.080882   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:30.081350   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:30.081432   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:30.580706   38829 type.go:168] "Request Body" body=""
	I1213 18:38:30.580834   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:30.581124   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:31.080774   38829 type.go:168] "Request Body" body=""
	I1213 18:38:31.080864   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:31.081259   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:31.580984   38829 type.go:168] "Request Body" body=""
	I1213 18:38:31.581082   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:31.581450   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:32.080667   38829 type.go:168] "Request Body" body=""
	I1213 18:38:32.080743   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:32.081034   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:32.580743   38829 type.go:168] "Request Body" body=""
	I1213 18:38:32.580816   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:32.581200   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:32.581255   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:33.080726   38829 type.go:168] "Request Body" body=""
	I1213 18:38:33.080809   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:33.081182   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:33.580725   38829 type.go:168] "Request Body" body=""
	I1213 18:38:33.580795   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:33.581164   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:34.081257   38829 type.go:168] "Request Body" body=""
	I1213 18:38:34.081337   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:34.081668   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:34.581504   38829 type.go:168] "Request Body" body=""
	I1213 18:38:34.581582   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:34.581919   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:34.581974   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:35.080651   38829 type.go:168] "Request Body" body=""
	I1213 18:38:35.080731   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:35.081024   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:35.580713   38829 type.go:168] "Request Body" body=""
	I1213 18:38:35.580792   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:35.581140   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:36.080919   38829 type.go:168] "Request Body" body=""
	I1213 18:38:36.080998   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:36.081335   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:36.580681   38829 type.go:168] "Request Body" body=""
	I1213 18:38:36.580752   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:36.581033   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:37.080717   38829 type.go:168] "Request Body" body=""
	I1213 18:38:37.080818   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:37.081165   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:37.081218   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:37.580733   38829 type.go:168] "Request Body" body=""
	I1213 18:38:37.580809   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:37.581143   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:38.080691   38829 type.go:168] "Request Body" body=""
	I1213 18:38:38.080786   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:38.081186   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:38.581125   38829 type.go:168] "Request Body" body=""
	I1213 18:38:38.581202   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:38.581601   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:39.081372   38829 type.go:168] "Request Body" body=""
	I1213 18:38:39.081450   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:39.081746   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:39.081795   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:39.581476   38829 type.go:168] "Request Body" body=""
	I1213 18:38:39.581574   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:39.581834   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:40.080652   38829 type.go:168] "Request Body" body=""
	I1213 18:38:40.080736   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:40.081070   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:40.580762   38829 type.go:168] "Request Body" body=""
	I1213 18:38:40.580837   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:40.581170   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:41.080790   38829 type.go:168] "Request Body" body=""
	I1213 18:38:41.080859   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:41.081138   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:41.580736   38829 type.go:168] "Request Body" body=""
	I1213 18:38:41.580815   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:41.581161   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:41.581213   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:42.081232   38829 type.go:168] "Request Body" body=""
	I1213 18:38:42.081358   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:42.081865   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:42.580689   38829 type.go:168] "Request Body" body=""
	I1213 18:38:42.580771   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:42.581121   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:43.080823   38829 type.go:168] "Request Body" body=""
	I1213 18:38:43.080907   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:43.081225   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:43.580730   38829 type.go:168] "Request Body" body=""
	I1213 18:38:43.580836   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:43.581158   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:44.081575   38829 type.go:168] "Request Body" body=""
	I1213 18:38:44.081651   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:44.081974   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:44.082018   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:44.580749   38829 type.go:168] "Request Body" body=""
	I1213 18:38:44.580850   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:44.581196   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:45.080840   38829 type.go:168] "Request Body" body=""
	I1213 18:38:45.080920   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:45.081286   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:45.580954   38829 type.go:168] "Request Body" body=""
	I1213 18:38:45.581055   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:45.581346   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:46.081059   38829 type.go:168] "Request Body" body=""
	I1213 18:38:46.081132   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:46.081421   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:46.581118   38829 type.go:168] "Request Body" body=""
	I1213 18:38:46.581200   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:46.581535   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:46.581590   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:47.081106   38829 type.go:168] "Request Body" body=""
	I1213 18:38:47.081224   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:47.081480   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:47.581264   38829 type.go:168] "Request Body" body=""
	I1213 18:38:47.581336   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:47.581677   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:48.081348   38829 type.go:168] "Request Body" body=""
	I1213 18:38:48.081420   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:48.081786   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:48.580712   38829 type.go:168] "Request Body" body=""
	I1213 18:38:48.580809   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:48.581132   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:49.081267   38829 type.go:168] "Request Body" body=""
	I1213 18:38:49.081338   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:49.081661   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:49.081719   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:49.581307   38829 type.go:168] "Request Body" body=""
	I1213 18:38:49.581390   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:49.581723   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:50.081491   38829 type.go:168] "Request Body" body=""
	I1213 18:38:50.081558   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:50.081836   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:50.581617   38829 type.go:168] "Request Body" body=""
	I1213 18:38:50.581690   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:50.582006   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:51.080731   38829 type.go:168] "Request Body" body=""
	I1213 18:38:51.080809   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:51.081173   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:51.580635   38829 type.go:168] "Request Body" body=""
	I1213 18:38:51.580703   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:51.581040   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:51.581092   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:52.080731   38829 type.go:168] "Request Body" body=""
	I1213 18:38:52.080812   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:52.081170   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:52.580897   38829 type.go:168] "Request Body" body=""
	I1213 18:38:52.580975   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:52.581319   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:53.081002   38829 type.go:168] "Request Body" body=""
	I1213 18:38:53.081090   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:53.081366   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:53.580734   38829 type.go:168] "Request Body" body=""
	I1213 18:38:53.580811   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:53.581210   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:53.581264   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:54.081117   38829 type.go:168] "Request Body" body=""
	I1213 18:38:54.081197   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:54.081547   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:54.581298   38829 type.go:168] "Request Body" body=""
	I1213 18:38:54.581371   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:54.581643   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:55.081403   38829 type.go:168] "Request Body" body=""
	I1213 18:38:55.081482   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:55.081842   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:55.581455   38829 type.go:168] "Request Body" body=""
	I1213 18:38:55.581534   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:55.581851   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:55.581906   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:56.080602   38829 type.go:168] "Request Body" body=""
	I1213 18:38:56.080680   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:56.081049   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:56.580731   38829 type.go:168] "Request Body" body=""
	I1213 18:38:56.580803   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:56.581197   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:57.080761   38829 type.go:168] "Request Body" body=""
	I1213 18:38:57.080844   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:57.081204   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:57.580625   38829 type.go:168] "Request Body" body=""
	I1213 18:38:57.580703   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:57.580967   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:58.080697   38829 type.go:168] "Request Body" body=""
	I1213 18:38:58.080767   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:58.081073   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:58.081121   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:58.580746   38829 type.go:168] "Request Body" body=""
	I1213 18:38:58.580821   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:58.581193   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:59.080619   38829 type.go:168] "Request Body" body=""
	I1213 18:38:59.080690   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:59.080957   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:59.580697   38829 type.go:168] "Request Body" body=""
	I1213 18:38:59.580775   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:59.581075   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:00.080781   38829 type.go:168] "Request Body" body=""
	I1213 18:39:00.080864   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:00.081214   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:00.081263   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:00.580868   38829 type.go:168] "Request Body" body=""
	I1213 18:39:00.580959   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:00.581261   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:01.080726   38829 type.go:168] "Request Body" body=""
	I1213 18:39:01.080795   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:01.081160   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:01.580755   38829 type.go:168] "Request Body" body=""
	I1213 18:39:01.580837   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:01.581212   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:02.080885   38829 type.go:168] "Request Body" body=""
	I1213 18:39:02.080961   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:02.081256   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:02.081306   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:02.580741   38829 type.go:168] "Request Body" body=""
	I1213 18:39:02.580818   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:02.581177   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:03.080736   38829 type.go:168] "Request Body" body=""
	I1213 18:39:03.080810   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:03.081170   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:03.580700   38829 type.go:168] "Request Body" body=""
	I1213 18:39:03.580773   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:03.581077   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:04.080632   38829 type.go:168] "Request Body" body=""
	I1213 18:39:04.080714   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:04.081077   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:04.580778   38829 type.go:168] "Request Body" body=""
	I1213 18:39:04.580863   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:04.581243   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:04.581303   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:05.080687   38829 type.go:168] "Request Body" body=""
	I1213 18:39:05.080765   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:05.081059   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:05.580796   38829 type.go:168] "Request Body" body=""
	I1213 18:39:05.580872   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:05.581215   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:06.080727   38829 type.go:168] "Request Body" body=""
	I1213 18:39:06.080803   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:06.081158   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:06.580837   38829 type.go:168] "Request Body" body=""
	I1213 18:39:06.580917   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:06.581202   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:07.080725   38829 type.go:168] "Request Body" body=""
	I1213 18:39:07.080808   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:07.081164   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:07.081214   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:07.580716   38829 type.go:168] "Request Body" body=""
	I1213 18:39:07.580794   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:07.581129   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:08.080858   38829 type.go:168] "Request Body" body=""
	I1213 18:39:08.080931   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:08.081213   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:08.581137   38829 type.go:168] "Request Body" body=""
	I1213 18:39:08.581207   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:08.581513   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:09.081065   38829 type.go:168] "Request Body" body=""
	I1213 18:39:09.081139   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:09.081514   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:09.081581   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:09.581276   38829 type.go:168] "Request Body" body=""
	I1213 18:39:09.581342   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:09.581644   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:10.081407   38829 type.go:168] "Request Body" body=""
	I1213 18:39:10.081483   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:10.081851   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:10.581496   38829 type.go:168] "Request Body" body=""
	I1213 18:39:10.581567   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:10.581887   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:11.080629   38829 type.go:168] "Request Body" body=""
	I1213 18:39:11.080701   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:11.081001   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:11.580726   38829 type.go:168] "Request Body" body=""
	I1213 18:39:11.580805   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:11.581121   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:11.581171   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:12.080760   38829 type.go:168] "Request Body" body=""
	I1213 18:39:12.080838   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:12.081152   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:12.580671   38829 type.go:168] "Request Body" body=""
	I1213 18:39:12.580744   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:12.581068   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:13.080734   38829 type.go:168] "Request Body" body=""
	I1213 18:39:13.080808   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:13.081124   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:13.580863   38829 type.go:168] "Request Body" body=""
	I1213 18:39:13.580937   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:13.581281   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:13.581332   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:14.081577   38829 type.go:168] "Request Body" body=""
	I1213 18:39:14.081653   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:14.081950   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:14.580638   38829 type.go:168] "Request Body" body=""
	I1213 18:39:14.580713   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:14.581046   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:15.080717   38829 type.go:168] "Request Body" body=""
	I1213 18:39:15.080825   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:15.081191   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:15.580864   38829 type.go:168] "Request Body" body=""
	I1213 18:39:15.580936   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:15.581210   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:16.080732   38829 type.go:168] "Request Body" body=""
	I1213 18:39:16.080807   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:16.081171   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:16.081237   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:16.580894   38829 type.go:168] "Request Body" body=""
	I1213 18:39:16.580969   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:16.581301   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:17.080988   38829 type.go:168] "Request Body" body=""
	I1213 18:39:17.081089   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:17.081420   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:17.580765   38829 type.go:168] "Request Body" body=""
	I1213 18:39:17.580844   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:17.581202   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:18.080887   38829 type.go:168] "Request Body" body=""
	I1213 18:39:18.080962   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:18.081285   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:18.081330   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:18.581099   38829 type.go:168] "Request Body" body=""
	I1213 18:39:18.581170   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:18.581423   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:19.081384   38829 type.go:168] "Request Body" body=""
	I1213 18:39:19.081453   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:19.081768   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:19.581414   38829 type.go:168] "Request Body" body=""
	I1213 18:39:19.581490   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:19.581786   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:20.081602   38829 type.go:168] "Request Body" body=""
	I1213 18:39:20.081678   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:20.081965   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:20.082018   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:20.580679   38829 type.go:168] "Request Body" body=""
	I1213 18:39:20.580788   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:20.581147   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:21.080703   38829 type.go:168] "Request Body" body=""
	I1213 18:39:21.080796   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:21.081146   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:21.580784   38829 type.go:168] "Request Body" body=""
	I1213 18:39:21.580863   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:21.581224   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:22.080782   38829 type.go:168] "Request Body" body=""
	I1213 18:39:22.080855   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:22.081300   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:22.580762   38829 type.go:168] "Request Body" body=""
	I1213 18:39:22.580835   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:22.581147   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:22.581194   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:23.080788   38829 type.go:168] "Request Body" body=""
	I1213 18:39:23.080860   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:23.081193   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:23.580732   38829 type.go:168] "Request Body" body=""
	I1213 18:39:23.580820   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:23.581147   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:24.081435   38829 type.go:168] "Request Body" body=""
	I1213 18:39:24.081530   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:24.081884   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:24.581587   38829 type.go:168] "Request Body" body=""
	I1213 18:39:24.581657   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:24.581912   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:24.581951   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:25.080657   38829 type.go:168] "Request Body" body=""
	I1213 18:39:25.080734   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:25.081179   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:25.580733   38829 type.go:168] "Request Body" body=""
	I1213 18:39:25.580821   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:25.581190   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:26.080869   38829 type.go:168] "Request Body" body=""
	I1213 18:39:26.080936   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:26.081224   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:26.580741   38829 type.go:168] "Request Body" body=""
	I1213 18:39:26.580814   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:26.581148   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:27.080703   38829 type.go:168] "Request Body" body=""
	I1213 18:39:27.080786   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:27.081111   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:27.081165   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:27.580724   38829 type.go:168] "Request Body" body=""
	I1213 18:39:27.580797   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:27.581139   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:28.080722   38829 type.go:168] "Request Body" body=""
	I1213 18:39:28.080793   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:28.081199   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:28.580834   38829 type.go:168] "Request Body" body=""
	I1213 18:39:28.580915   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:28.581280   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:29.081285   38829 type.go:168] "Request Body" body=""
	I1213 18:39:29.081351   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:29.081628   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:29.081672   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:29.581065   38829 type.go:168] "Request Body" body=""
	I1213 18:39:29.581140   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:29.581481   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:30.081344   38829 type.go:168] "Request Body" body=""
	I1213 18:39:30.081439   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:30.081896   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:30.580671   38829 type.go:168] "Request Body" body=""
	I1213 18:39:30.580748   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:30.581066   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:31.080743   38829 type.go:168] "Request Body" body=""
	I1213 18:39:31.080834   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:31.081162   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:31.580866   38829 type.go:168] "Request Body" body=""
	I1213 18:39:31.580942   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:31.581337   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:31.581394   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:32.080782   38829 type.go:168] "Request Body" body=""
	I1213 18:39:32.080853   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:32.081134   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:32.580755   38829 type.go:168] "Request Body" body=""
	I1213 18:39:32.580829   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:32.581200   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:33.080901   38829 type.go:168] "Request Body" body=""
	I1213 18:39:33.080972   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:33.081318   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:33.580802   38829 type.go:168] "Request Body" body=""
	I1213 18:39:33.580878   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:33.581182   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:34.080872   38829 type.go:168] "Request Body" body=""
	I1213 18:39:34.080943   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:34.081303   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:34.081358   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:34.580730   38829 type.go:168] "Request Body" body=""
	I1213 18:39:34.580804   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:34.581136   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:35.080815   38829 type.go:168] "Request Body" body=""
	I1213 18:39:35.080883   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:35.081173   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:35.580730   38829 type.go:168] "Request Body" body=""
	I1213 18:39:35.580802   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:35.581133   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:36.080735   38829 type.go:168] "Request Body" body=""
	I1213 18:39:36.080809   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:36.081172   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:36.580859   38829 type.go:168] "Request Body" body=""
	I1213 18:39:36.580941   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:36.581223   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:36.581264   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:37.080720   38829 type.go:168] "Request Body" body=""
	I1213 18:39:37.080813   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:37.081267   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:37.580761   38829 type.go:168] "Request Body" body=""
	I1213 18:39:37.580833   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:37.581165   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:38.080809   38829 type.go:168] "Request Body" body=""
	I1213 18:39:38.080881   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:38.081177   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:38.581160   38829 type.go:168] "Request Body" body=""
	I1213 18:39:38.581229   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:38.581546   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:38.581608   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:39.081316   38829 type.go:168] "Request Body" body=""
	I1213 18:39:39.081387   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:39.081699   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:39.581307   38829 type.go:168] "Request Body" body=""
	I1213 18:39:39.581382   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:39.581710   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:40.081503   38829 type.go:168] "Request Body" body=""
	I1213 18:39:40.081578   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:40.081882   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:40.581632   38829 type.go:168] "Request Body" body=""
	I1213 18:39:40.581730   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:40.582090   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:40.582139   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:41.080640   38829 type.go:168] "Request Body" body=""
	I1213 18:39:41.080710   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:41.081046   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:41.580670   38829 type.go:168] "Request Body" body=""
	I1213 18:39:41.580748   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:41.581076   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:42.080797   38829 type.go:168] "Request Body" body=""
	I1213 18:39:42.080878   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:42.081282   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:42.580711   38829 type.go:168] "Request Body" body=""
	I1213 18:39:42.580802   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:42.581132   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:43.080747   38829 type.go:168] "Request Body" body=""
	I1213 18:39:43.080819   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:43.081217   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:43.081283   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:43.580965   38829 type.go:168] "Request Body" body=""
	I1213 18:39:43.581057   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:43.581416   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:44.081437   38829 type.go:168] "Request Body" body=""
	I1213 18:39:44.081507   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:44.081776   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:44.581633   38829 type.go:168] "Request Body" body=""
	I1213 18:39:44.581707   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:44.582020   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:45.080770   38829 type.go:168] "Request Body" body=""
	I1213 18:39:45.080891   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:45.081375   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:45.081434   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:45.581089   38829 type.go:168] "Request Body" body=""
	I1213 18:39:45.581158   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:45.581469   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:46.080755   38829 type.go:168] "Request Body" body=""
	I1213 18:39:46.080828   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:46.081201   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:46.580794   38829 type.go:168] "Request Body" body=""
	I1213 18:39:46.580865   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:46.581173   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:47.080689   38829 type.go:168] "Request Body" body=""
	I1213 18:39:47.080768   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:47.081094   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:47.580669   38829 type.go:168] "Request Body" body=""
	I1213 18:39:47.580763   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:47.581109   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:47.581164   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:48.080848   38829 type.go:168] "Request Body" body=""
	I1213 18:39:48.080924   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:48.081228   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:48.581237   38829 type.go:168] "Request Body" body=""
	I1213 18:39:48.581311   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:48.581637   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:49.081081   38829 type.go:168] "Request Body" body=""
	I1213 18:39:49.081164   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:49.081471   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:49.581258   38829 type.go:168] "Request Body" body=""
	I1213 18:39:49.581336   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:49.581617   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:49.581664   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:50.081346   38829 type.go:168] "Request Body" body=""
	I1213 18:39:50.081416   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:50.081693   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:50.581552   38829 type.go:168] "Request Body" body=""
	I1213 18:39:50.581621   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:50.581942   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:51.080672   38829 type.go:168] "Request Body" body=""
	I1213 18:39:51.080806   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:51.081235   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:51.580885   38829 type.go:168] "Request Body" body=""
	I1213 18:39:51.580958   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:51.581315   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:52.080737   38829 type.go:168] "Request Body" body=""
	I1213 18:39:52.080811   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:52.081193   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:52.081249   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:52.580704   38829 type.go:168] "Request Body" body=""
	I1213 18:39:52.580784   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:52.581172   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:53.080692   38829 type.go:168] "Request Body" body=""
	I1213 18:39:53.080761   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:53.081060   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:53.580744   38829 type.go:168] "Request Body" body=""
	I1213 18:39:53.580823   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:53.581232   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:54.081089   38829 type.go:168] "Request Body" body=""
	I1213 18:39:54.081164   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:54.081658   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:54.081712   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:54.581346   38829 type.go:168] "Request Body" body=""
	I1213 18:39:54.581418   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:54.581673   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:55.081499   38829 type.go:168] "Request Body" body=""
	I1213 18:39:55.081596   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:55.081941   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:55.580685   38829 type.go:168] "Request Body" body=""
	I1213 18:39:55.580777   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:55.581180   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:56.080674   38829 type.go:168] "Request Body" body=""
	I1213 18:39:56.080750   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:56.081047   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:56.580707   38829 type.go:168] "Request Body" body=""
	I1213 18:39:56.580778   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:56.581204   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:56.581262   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:57.080917   38829 type.go:168] "Request Body" body=""
	I1213 18:39:57.081002   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:57.081366   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:57.580664   38829 type.go:168] "Request Body" body=""
	I1213 18:39:57.580745   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:57.581033   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:58.081028   38829 type.go:168] "Request Body" body=""
	I1213 18:39:58.081122   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:58.081478   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:58.581557   38829 type.go:168] "Request Body" body=""
	I1213 18:39:58.581639   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:58.582001   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:58.582075   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:59.081358   38829 type.go:168] "Request Body" body=""
	I1213 18:39:59.081453   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:59.081774   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:59.581595   38829 type.go:168] "Request Body" body=""
	I1213 18:39:59.581667   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:59.581967   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:00.080718   38829 type.go:168] "Request Body" body=""
	I1213 18:40:00.080803   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:00.081201   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:00.582760   38829 type.go:168] "Request Body" body=""
	I1213 18:40:00.582857   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:00.583187   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:00.583244   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:01.080684   38829 type.go:168] "Request Body" body=""
	I1213 18:40:01.080755   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:01.081087   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:01.580820   38829 type.go:168] "Request Body" body=""
	I1213 18:40:01.580895   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:01.581240   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:02.080921   38829 type.go:168] "Request Body" body=""
	I1213 18:40:02.080993   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:02.081270   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:02.580731   38829 type.go:168] "Request Body" body=""
	I1213 18:40:02.580806   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:02.581172   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:03.080880   38829 type.go:168] "Request Body" body=""
	I1213 18:40:03.080955   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:03.081306   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:03.081361   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:03.580996   38829 type.go:168] "Request Body" body=""
	I1213 18:40:03.581076   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:03.581335   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:04.080737   38829 type.go:168] "Request Body" body=""
	I1213 18:40:04.080818   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:04.081183   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:04.580737   38829 type.go:168] "Request Body" body=""
	I1213 18:40:04.580808   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:04.581149   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:05.080850   38829 type.go:168] "Request Body" body=""
	I1213 18:40:05.080927   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:05.081263   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:05.580963   38829 type.go:168] "Request Body" body=""
	I1213 18:40:05.581056   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:05.581401   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:05.581460   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:06.081245   38829 type.go:168] "Request Body" body=""
	I1213 18:40:06.081316   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:06.081669   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:06.581426   38829 type.go:168] "Request Body" body=""
	I1213 18:40:06.581509   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:06.581848   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:07.081645   38829 type.go:168] "Request Body" body=""
	I1213 18:40:07.081722   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:07.082062   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:07.580728   38829 type.go:168] "Request Body" body=""
	I1213 18:40:07.580813   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:07.581162   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:08.080728   38829 type.go:168] "Request Body" body=""
	I1213 18:40:08.080798   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:08.081088   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:08.081131   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:08.580917   38829 type.go:168] "Request Body" body=""
	I1213 18:40:08.580997   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:08.581369   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:09.081067   38829 type.go:168] "Request Body" body=""
	I1213 18:40:09.081141   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:09.081470   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:09.581192   38829 type.go:168] "Request Body" body=""
	I1213 18:40:09.581258   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:09.581523   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:10.081376   38829 type.go:168] "Request Body" body=""
	I1213 18:40:10.081454   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:10.081809   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:10.081865   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:10.581615   38829 type.go:168] "Request Body" body=""
	I1213 18:40:10.581696   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:10.582036   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:11.080690   38829 type.go:168] "Request Body" body=""
	I1213 18:40:11.080762   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:11.081125   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:11.580814   38829 type.go:168] "Request Body" body=""
	I1213 18:40:11.580891   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:11.581233   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:12.080745   38829 type.go:168] "Request Body" body=""
	I1213 18:40:12.080820   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:12.081174   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:12.580730   38829 type.go:168] "Request Body" body=""
	I1213 18:40:12.580802   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:12.581118   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:12.581177   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:13.080870   38829 type.go:168] "Request Body" body=""
	I1213 18:40:13.080953   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:13.081298   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:13.580990   38829 type.go:168] "Request Body" body=""
	I1213 18:40:13.581130   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:13.581452   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:14.081563   38829 type.go:168] "Request Body" body=""
	I1213 18:40:14.081631   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:14.081949   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:14.580642   38829 type.go:168] "Request Body" body=""
	I1213 18:40:14.580724   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:14.581092   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:15.080672   38829 type.go:168] "Request Body" body=""
	I1213 18:40:15.080749   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:15.081138   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:15.081197   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:15.580905   38829 type.go:168] "Request Body" body=""
	I1213 18:40:15.580977   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:15.581270   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:16.080728   38829 type.go:168] "Request Body" body=""
	I1213 18:40:16.080801   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:16.081170   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:16.580745   38829 type.go:168] "Request Body" body=""
	I1213 18:40:16.580823   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:16.581182   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:17.080854   38829 type.go:168] "Request Body" body=""
	I1213 18:40:17.080925   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:17.081196   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:17.081236   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:17.580885   38829 type.go:168] "Request Body" body=""
	I1213 18:40:17.580960   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:17.581311   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:18.081048   38829 type.go:168] "Request Body" body=""
	I1213 18:40:18.081128   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:18.081456   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:18.581421   38829 type.go:168] "Request Body" body=""
	I1213 18:40:18.581495   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:18.581752   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:19.081269   38829 type.go:168] "Request Body" body=""
	I1213 18:40:19.081345   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:19.081667   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:19.081723   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:19.581465   38829 type.go:168] "Request Body" body=""
	I1213 18:40:19.581546   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:19.581834   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:20.081620   38829 type.go:168] "Request Body" body=""
	I1213 18:40:20.081707   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:20.082023   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:20.580732   38829 type.go:168] "Request Body" body=""
	I1213 18:40:20.580806   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:20.581185   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:21.080748   38829 type.go:168] "Request Body" body=""
	I1213 18:40:21.080828   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:21.081195   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:21.580880   38829 type.go:168] "Request Body" body=""
	I1213 18:40:21.580954   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:21.581229   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:21.581273   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:22.080726   38829 type.go:168] "Request Body" body=""
	I1213 18:40:22.080802   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:22.081186   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:22.580892   38829 type.go:168] "Request Body" body=""
	I1213 18:40:22.580971   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:22.581314   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:23.080852   38829 type.go:168] "Request Body" body=""
	I1213 18:40:23.080921   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:23.081254   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:23.580738   38829 type.go:168] "Request Body" body=""
	I1213 18:40:23.580816   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:23.581213   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:24.080992   38829 type.go:168] "Request Body" body=""
	I1213 18:40:24.081086   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:24.081439   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:24.081493   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:24.581181   38829 type.go:168] "Request Body" body=""
	I1213 18:40:24.581254   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:24.581518   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:25.081519   38829 type.go:168] "Request Body" body=""
	I1213 18:40:25.081638   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:25.082066   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:25.580956   38829 type.go:168] "Request Body" body=""
	I1213 18:40:25.581049   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:25.581403   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:26.081103   38829 type.go:168] "Request Body" body=""
	I1213 18:40:26.081188   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:26.081496   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:26.081544   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:26.581271   38829 type.go:168] "Request Body" body=""
	I1213 18:40:26.581346   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:26.581679   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:27.081463   38829 type.go:168] "Request Body" body=""
	I1213 18:40:27.081544   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:27.081845   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:27.581582   38829 type.go:168] "Request Body" body=""
	I1213 18:40:27.581657   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:27.581970   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:28.080670   38829 type.go:168] "Request Body" body=""
	I1213 18:40:28.080746   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:28.081095   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:28.580759   38829 type.go:168] "Request Body" body=""
	I1213 18:40:28.580833   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:28.581189   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:28.581244   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:29.080966   38829 type.go:168] "Request Body" body=""
	I1213 18:40:29.081057   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:29.081325   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:29.580737   38829 type.go:168] "Request Body" body=""
	I1213 18:40:29.580809   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:29.581235   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:30.080981   38829 type.go:168] "Request Body" body=""
	I1213 18:40:30.081106   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:30.081499   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:30.581288   38829 type.go:168] "Request Body" body=""
	I1213 18:40:30.581365   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:30.581686   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:30.581744   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:31.081563   38829 type.go:168] "Request Body" body=""
	I1213 18:40:31.081643   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:31.081985   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:31.580733   38829 type.go:168] "Request Body" body=""
	I1213 18:40:31.580813   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:31.581128   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:32.080686   38829 type.go:168] "Request Body" body=""
	I1213 18:40:32.080759   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:32.081089   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:32.580719   38829 type.go:168] "Request Body" body=""
	I1213 18:40:32.580795   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:32.581153   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:33.080697   38829 type.go:168] "Request Body" body=""
	I1213 18:40:33.080771   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:33.081078   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:33.081125   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:33.580695   38829 type.go:168] "Request Body" body=""
	I1213 18:40:33.580776   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:33.581082   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:34.080711   38829 type.go:168] "Request Body" body=""
	I1213 18:40:34.080785   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:34.081116   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:34.580731   38829 type.go:168] "Request Body" body=""
	I1213 18:40:34.580810   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:34.581135   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:35.080858   38829 type.go:168] "Request Body" body=""
	I1213 18:40:35.080940   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:35.081258   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:35.081316   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:35.580736   38829 type.go:168] "Request Body" body=""
	I1213 18:40:35.580819   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:35.581180   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:36.080905   38829 type.go:168] "Request Body" body=""
	I1213 18:40:36.080982   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:36.081405   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:36.580715   38829 type.go:168] "Request Body" body=""
	I1213 18:40:36.580780   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:36.581071   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:37.080758   38829 type.go:168] "Request Body" body=""
	I1213 18:40:37.080841   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:37.081177   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:37.580742   38829 type.go:168] "Request Body" body=""
	I1213 18:40:37.580822   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:37.581185   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:37.581240   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:38.080845   38829 type.go:168] "Request Body" body=""
	I1213 18:40:38.080924   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:38.081284   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:38.580992   38829 type.go:168] "Request Body" body=""
	I1213 18:40:38.581079   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:38.581427   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:39.081037   38829 type.go:168] "Request Body" body=""
	I1213 18:40:39.081109   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:39.081425   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:39.580691   38829 type.go:168] "Request Body" body=""
	I1213 18:40:39.580779   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:39.581096   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:40.080864   38829 type.go:168] "Request Body" body=""
	I1213 18:40:40.080952   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:40.081316   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:40.081370   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:40.581072   38829 type.go:168] "Request Body" body=""
	I1213 18:40:40.581147   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:40.581455   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:41.080649   38829 type.go:168] "Request Body" body=""
	I1213 18:40:41.080720   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:41.080968   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:41.580717   38829 type.go:168] "Request Body" body=""
	I1213 18:40:41.580821   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:41.581143   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:42.080793   38829 type.go:168] "Request Body" body=""
	I1213 18:40:42.080889   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:42.081224   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:42.580774   38829 type.go:168] "Request Body" body=""
	I1213 18:40:42.580846   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:42.581129   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:42.581171   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:43.080817   38829 type.go:168] "Request Body" body=""
	I1213 18:40:43.080889   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:43.081182   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:43.580912   38829 type.go:168] "Request Body" body=""
	I1213 18:40:43.581022   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:43.581350   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:44.081100   38829 type.go:168] "Request Body" body=""
	I1213 18:40:44.081184   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:44.081466   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:44.581295   38829 type.go:168] "Request Body" body=""
	I1213 18:40:44.581368   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:44.581680   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:44.581735   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:45.081574   38829 type.go:168] "Request Body" body=""
	I1213 18:40:45.081671   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:45.082057   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:45.580753   38829 type.go:168] "Request Body" body=""
	I1213 18:40:45.580826   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:45.581123   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:46.080724   38829 type.go:168] "Request Body" body=""
	I1213 18:40:46.080807   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:46.081173   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:46.580875   38829 type.go:168] "Request Body" body=""
	I1213 18:40:46.580954   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:46.581347   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:47.080772   38829 type.go:168] "Request Body" body=""
	I1213 18:40:47.080843   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:47.081169   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:47.081222   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:47.580721   38829 type.go:168] "Request Body" body=""
	I1213 18:40:47.580803   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:47.581145   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:48.080733   38829 type.go:168] "Request Body" body=""
	I1213 18:40:48.080812   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:48.081180   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:48.581574   38829 type.go:168] "Request Body" body=""
	I1213 18:40:48.581646   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:48.581923   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:49.080895   38829 type.go:168] "Request Body" body=""
	I1213 18:40:49.080969   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:49.081284   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:49.081332   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:49.580737   38829 type.go:168] "Request Body" body=""
	I1213 18:40:49.580813   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:49.581189   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:50.080877   38829 type.go:168] "Request Body" body=""
	I1213 18:40:50.080951   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:50.081313   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:50.580740   38829 type.go:168] "Request Body" body=""
	I1213 18:40:50.580817   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:50.581173   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:51.080726   38829 type.go:168] "Request Body" body=""
	I1213 18:40:51.080811   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:51.081140   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:51.580661   38829 type.go:168] "Request Body" body=""
	I1213 18:40:51.580735   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:51.581094   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:51.581147   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:52.080738   38829 type.go:168] "Request Body" body=""
	I1213 18:40:52.080814   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:52.081156   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:52.580707   38829 type.go:168] "Request Body" body=""
	I1213 18:40:52.580781   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:52.581124   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:53.080661   38829 type.go:168] "Request Body" body=""
	I1213 18:40:53.080737   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:53.081101   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:53.580661   38829 type.go:168] "Request Body" body=""
	I1213 18:40:53.580737   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:53.581073   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:54.081075   38829 type.go:168] "Request Body" body=""
	I1213 18:40:54.081153   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:54.081490   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:54.081544   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:54.580688   38829 type.go:168] "Request Body" body=""
	I1213 18:40:54.580770   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:54.581090   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:55.080755   38829 type.go:168] "Request Body" body=""
	I1213 18:40:55.080845   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:55.081218   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:55.580731   38829 type.go:168] "Request Body" body=""
	I1213 18:40:55.580806   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:55.581128   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:56.080828   38829 type.go:168] "Request Body" body=""
	I1213 18:40:56.080907   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:56.081254   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:56.580945   38829 type.go:168] "Request Body" body=""
	I1213 18:40:56.581061   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:56.581383   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:56.581438   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:57.081145   38829 type.go:168] "Request Body" body=""
	I1213 18:40:57.081219   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:57.081499   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:57.581369   38829 type.go:168] "Request Body" body=""
	I1213 18:40:57.581461   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:57.581753   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:58.081564   38829 type.go:168] "Request Body" body=""
	I1213 18:40:58.081635   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:58.081964   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:58.580734   38829 type.go:168] "Request Body" body=""
	I1213 18:40:58.580811   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:58.581151   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:59.081182   38829 type.go:168] "Request Body" body=""
	I1213 18:40:59.081258   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:59.081514   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:59.081555   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:59.581349   38829 type.go:168] "Request Body" body=""
	I1213 18:40:59.581423   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:59.581720   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:00.081815   38829 type.go:168] "Request Body" body=""
	I1213 18:41:00.081903   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:00.082221   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:00.581646   38829 type.go:168] "Request Body" body=""
	I1213 18:41:00.581716   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:00.582021   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:01.080712   38829 type.go:168] "Request Body" body=""
	I1213 18:41:01.080792   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:01.081087   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:01.580731   38829 type.go:168] "Request Body" body=""
	I1213 18:41:01.580810   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:01.581320   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:01.581376   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:02.080810   38829 type.go:168] "Request Body" body=""
	I1213 18:41:02.080888   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:02.081180   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:02.580849   38829 type.go:168] "Request Body" body=""
	I1213 18:41:02.580920   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:02.581274   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:03.080853   38829 type.go:168] "Request Body" body=""
	I1213 18:41:03.080929   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:03.081297   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:03.580687   38829 type.go:168] "Request Body" body=""
	I1213 18:41:03.580761   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:03.581113   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:04.080818   38829 type.go:168] "Request Body" body=""
	I1213 18:41:04.080891   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:04.081231   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:04.081279   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:04.580784   38829 type.go:168] "Request Body" body=""
	I1213 18:41:04.580861   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:04.581254   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:05.080702   38829 type.go:168] "Request Body" body=""
	I1213 18:41:05.080774   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:05.081067   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:05.580726   38829 type.go:168] "Request Body" body=""
	I1213 18:41:05.580823   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:05.581149   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:06.080754   38829 type.go:168] "Request Body" body=""
	I1213 18:41:06.080824   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:06.081183   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:06.580809   38829 type.go:168] "Request Body" body=""
	I1213 18:41:06.580876   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:06.581193   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:06.581275   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:07.080748   38829 type.go:168] "Request Body" body=""
	I1213 18:41:07.080818   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:07.081155   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:07.580864   38829 type.go:168] "Request Body" body=""
	I1213 18:41:07.580935   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:07.581293   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:08.080815   38829 type.go:168] "Request Body" body=""
	I1213 18:41:08.080882   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:08.081228   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:08.581184   38829 type.go:168] "Request Body" body=""
	I1213 18:41:08.581267   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:08.581600   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:08.581650   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:09.081329   38829 type.go:168] "Request Body" body=""
	I1213 18:41:09.081400   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:09.081701   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:09.581386   38829 type.go:168] "Request Body" body=""
	I1213 18:41:09.581459   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:09.581736   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:10.081624   38829 type.go:168] "Request Body" body=""
	I1213 18:41:10.081709   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:10.082054   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:10.580758   38829 type.go:168] "Request Body" body=""
	I1213 18:41:10.580829   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:10.581165   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:11.080690   38829 type.go:168] "Request Body" body=""
	I1213 18:41:11.080767   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:11.081130   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:11.081225   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:11.580737   38829 type.go:168] "Request Body" body=""
	I1213 18:41:11.580838   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:11.581297   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:12.080983   38829 type.go:168] "Request Body" body=""
	I1213 18:41:12.081129   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:12.081449   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:12.581247   38829 type.go:168] "Request Body" body=""
	I1213 18:41:12.581315   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:12.581576   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:13.080944   38829 type.go:168] "Request Body" body=""
	I1213 18:41:13.081031   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:13.081378   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:13.081435   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:13.580973   38829 type.go:168] "Request Body" body=""
	I1213 18:41:13.581116   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:13.581497   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:14.081648   38829 type.go:168] "Request Body" body=""
	I1213 18:41:14.081731   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:14.082000   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:14.580709   38829 type.go:168] "Request Body" body=""
	I1213 18:41:14.580805   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:14.581161   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:15.080870   38829 type.go:168] "Request Body" body=""
	I1213 18:41:15.080947   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:15.081336   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:15.580661   38829 type.go:168] "Request Body" body=""
	I1213 18:41:15.580729   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:15.581047   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:15.581086   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:16.080721   38829 type.go:168] "Request Body" body=""
	I1213 18:41:16.080833   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:16.081148   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:16.580760   38829 type.go:168] "Request Body" body=""
	I1213 18:41:16.580840   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:16.581166   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:17.080685   38829 type.go:168] "Request Body" body=""
	I1213 18:41:17.080772   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:17.081106   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:17.580714   38829 type.go:168] "Request Body" body=""
	I1213 18:41:17.580795   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:17.581116   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:17.581162   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:18.080745   38829 type.go:168] "Request Body" body=""
	I1213 18:41:18.080820   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:18.081200   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:18.581224   38829 type.go:168] "Request Body" body=""
	I1213 18:41:18.581296   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:18.581580   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:19.081352   38829 type.go:168] "Request Body" body=""
	I1213 18:41:19.081427   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:19.081734   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:19.581454   38829 type.go:168] "Request Body" body=""
	I1213 18:41:19.581571   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:19.581908   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:19.581960   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:20.081575   38829 type.go:168] "Request Body" body=""
	I1213 18:41:20.081653   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:20.081930   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:20.580639   38829 type.go:168] "Request Body" body=""
	I1213 18:41:20.580722   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:20.581082   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:21.080807   38829 type.go:168] "Request Body" body=""
	I1213 18:41:21.080885   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:21.081222   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:21.580675   38829 type.go:168] "Request Body" body=""
	I1213 18:41:21.580755   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:21.581125   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:22.080711   38829 type.go:168] "Request Body" body=""
	I1213 18:41:22.080789   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:22.081124   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:22.081174   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:22.580748   38829 type.go:168] "Request Body" body=""
	I1213 18:41:22.580823   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:22.581169   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:23.080686   38829 type.go:168] "Request Body" body=""
	I1213 18:41:23.080758   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:23.081067   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:23.580652   38829 type.go:168] "Request Body" body=""
	I1213 18:41:23.580733   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:23.581072   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:24.081615   38829 type.go:168] "Request Body" body=""
	I1213 18:41:24.081701   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:24.082028   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:24.082086   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:24.580715   38829 type.go:168] "Request Body" body=""
	I1213 18:41:24.580790   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:24.581145   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:25.080723   38829 type.go:168] "Request Body" body=""
	I1213 18:41:25.080800   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:25.081135   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:25.580732   38829 type.go:168] "Request Body" body=""
	I1213 18:41:25.580804   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:25.581183   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:26.080778   38829 type.go:168] "Request Body" body=""
	I1213 18:41:26.080846   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:26.081178   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:26.580887   38829 type.go:168] "Request Body" body=""
	I1213 18:41:26.580963   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:26.581315   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:26.581370   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:27.080706   38829 type.go:168] "Request Body" body=""
	I1213 18:41:27.080786   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:27.081128   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:27.580668   38829 type.go:168] "Request Body" body=""
	I1213 18:41:27.580741   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:27.581056   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:28.080772   38829 type.go:168] "Request Body" body=""
	I1213 18:41:28.080845   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:28.081201   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:28.580902   38829 type.go:168] "Request Body" body=""
	I1213 18:41:28.580974   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:28.581301   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:29.080749   38829 type.go:168] "Request Body" body=""
	I1213 18:41:29.080817   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:29.081091   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:29.081132   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:29.580839   38829 type.go:168] "Request Body" body=""
	I1213 18:41:29.580981   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:29.581329   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:30.080766   38829 type.go:168] "Request Body" body=""
	I1213 18:41:30.080851   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:30.081270   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:30.580990   38829 type.go:168] "Request Body" body=""
	I1213 18:41:30.581076   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:30.581343   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:31.080711   38829 type.go:168] "Request Body" body=""
	I1213 18:41:31.080787   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:31.081149   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:31.081200   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:31.580852   38829 type.go:168] "Request Body" body=""
	I1213 18:41:31.580935   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:31.581309   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:32.080976   38829 type.go:168] "Request Body" body=""
	I1213 18:41:32.081071   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:32.081376   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:32.580730   38829 type.go:168] "Request Body" body=""
	I1213 18:41:32.580812   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:32.581179   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:33.080899   38829 type.go:168] "Request Body" body=""
	I1213 18:41:33.080979   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:33.081353   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:33.081413   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:33.580694   38829 type.go:168] "Request Body" body=""
	I1213 18:41:33.580774   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:33.581069   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:34.081613   38829 type.go:168] "Request Body" body=""
	I1213 18:41:34.081689   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:34.082033   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:34.580727   38829 type.go:168] "Request Body" body=""
	I1213 18:41:34.580828   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:34.581146   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:35.080790   38829 type.go:168] "Request Body" body=""
	I1213 18:41:35.080863   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:35.081157   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:35.580696   38829 type.go:168] "Request Body" body=""
	I1213 18:41:35.580790   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:35.581078   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:35.581121   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:36.080756   38829 type.go:168] "Request Body" body=""
	I1213 18:41:36.080851   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:36.081282   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:36.580668   38829 type.go:168] "Request Body" body=""
	I1213 18:41:36.580739   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:36.581032   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:37.080757   38829 type.go:168] "Request Body" body=""
	I1213 18:41:37.080851   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:37.081179   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:37.580859   38829 type.go:168] "Request Body" body=""
	I1213 18:41:37.580931   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:37.581253   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:37.581299   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:38.080940   38829 type.go:168] "Request Body" body=""
	I1213 18:41:38.081033   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:38.081302   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:38.581248   38829 type.go:168] "Request Body" body=""
	I1213 18:41:38.581332   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:38.581671   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:39.081578   38829 type.go:168] "Request Body" body=""
	I1213 18:41:39.081659   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:39.081987   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:39.580653   38829 type.go:168] "Request Body" body=""
	I1213 18:41:39.580729   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:39.581076   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:40.080757   38829 type.go:168] "Request Body" body=""
	I1213 18:41:40.080841   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:40.081195   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:40.081257   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:40.580739   38829 type.go:168] "Request Body" body=""
	I1213 18:41:40.580813   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:40.581120   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:41.080675   38829 type.go:168] "Request Body" body=""
	I1213 18:41:41.080749   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:41.081085   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:41.580789   38829 type.go:168] "Request Body" body=""
	I1213 18:41:41.580862   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:41.581170   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:42.080802   38829 type.go:168] "Request Body" body=""
	I1213 18:41:42.080877   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:42.081216   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:42.580919   38829 type.go:168] "Request Body" body=""
	I1213 18:41:42.580994   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:42.581286   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:42.581339   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:43.080761   38829 type.go:168] "Request Body" body=""
	I1213 18:41:43.080833   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:43.081217   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:43.580933   38829 type.go:168] "Request Body" body=""
	I1213 18:41:43.581025   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:43.581344   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:44.081112   38829 type.go:168] "Request Body" body=""
	I1213 18:41:44.081178   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:44.081445   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:44.581279   38829 type.go:168] "Request Body" body=""
	I1213 18:41:44.581350   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:44.581653   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:44.581708   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:45.081520   38829 type.go:168] "Request Body" body=""
	I1213 18:41:45.081600   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:45.081937   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:45.580652   38829 type.go:168] "Request Body" body=""
	I1213 18:41:45.580731   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:45.581051   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:46.080751   38829 type.go:168] "Request Body" body=""
	I1213 18:41:46.080838   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:46.081265   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:46.580968   38829 type.go:168] "Request Body" body=""
	I1213 18:41:46.581065   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:46.581388   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:47.080619   38829 type.go:168] "Request Body" body=""
	I1213 18:41:47.080685   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:47.080942   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:47.080980   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:47.580668   38829 type.go:168] "Request Body" body=""
	I1213 18:41:47.580743   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:47.581077   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:48.080761   38829 type.go:168] "Request Body" body=""
	I1213 18:41:48.080842   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:48.081166   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:48.581104   38829 type.go:168] "Request Body" body=""
	I1213 18:41:48.581172   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:48.581434   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:49.081502   38829 type.go:168] "Request Body" body=""
	I1213 18:41:49.081574   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:49.081903   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:49.081968   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:49.580639   38829 type.go:168] "Request Body" body=""
	I1213 18:41:49.580722   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:49.581089   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:50.080709   38829 type.go:168] "Request Body" body=""
	I1213 18:41:50.080785   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:50.081111   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:50.580720   38829 type.go:168] "Request Body" body=""
	I1213 18:41:50.580802   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:50.581143   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:51.080888   38829 type.go:168] "Request Body" body=""
	I1213 18:41:51.080963   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:51.081279   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:51.580674   38829 type.go:168] "Request Body" body=""
	I1213 18:41:51.580740   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:51.581077   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:51.581128   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:52.080773   38829 type.go:168] "Request Body" body=""
	I1213 18:41:52.080894   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:52.081249   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:52.580793   38829 type.go:168] "Request Body" body=""
	I1213 18:41:52.580867   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:52.581218   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:53.080706   38829 type.go:168] "Request Body" body=""
	I1213 18:41:53.080781   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:53.081080   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:53.580683   38829 type.go:168] "Request Body" body=""
	I1213 18:41:53.580763   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:53.581106   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:53.581159   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:54.080735   38829 type.go:168] "Request Body" body=""
	I1213 18:41:54.080815   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:54.081170   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:54.580662   38829 type.go:168] "Request Body" body=""
	I1213 18:41:54.580733   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:54.581088   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:55.080714   38829 type.go:168] "Request Body" body=""
	I1213 18:41:55.080791   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:55.081154   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:55.580764   38829 type.go:168] "Request Body" body=""
	I1213 18:41:55.580837   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:55.581137   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:55.581182   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:56.080717   38829 type.go:168] "Request Body" body=""
	I1213 18:41:56.080790   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:56.081130   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:56.580729   38829 type.go:168] "Request Body" body=""
	I1213 18:41:56.580826   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:56.581140   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:57.080852   38829 type.go:168] "Request Body" body=""
	I1213 18:41:57.080924   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:57.081256   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:57.580921   38829 type.go:168] "Request Body" body=""
	I1213 18:41:57.581000   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:57.581269   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:57.581307   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:58.080750   38829 type.go:168] "Request Body" body=""
	I1213 18:41:58.080828   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:58.081201   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:58.580714   38829 type.go:168] "Request Body" body=""
	I1213 18:41:58.580799   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:58.581146   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:59.081521   38829 type.go:168] "Request Body" body=""
	I1213 18:41:59.081580   38829 node_ready.go:38] duration metric: took 6m0.001077775s for node "functional-752103" to be "Ready" ...
	I1213 18:41:59.084666   38829 out.go:203] 
	W1213 18:41:59.087601   38829 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1213 18:41:59.087625   38829 out.go:285] * 
	* 
	W1213 18:41:59.089766   38829 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 18:41:59.092666   38829 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:676: failed to soft start minikube. args "out/minikube-linux-arm64 start -p functional-752103 --alsologtostderr -v=8": exit status 80
functional_test.go:678: soft start took 6m5.669841733s for "functional-752103" cluster.
I1213 18:41:59.626144    4637 config.go:182] Loaded profile config "functional-752103": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
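Note on the failure above: every poll in the stderr log is a GET to https://192.168.49.2:8441/api/v1/nodes/functional-752103 that ends in "connect: connection refused", so the 6m0s WaitNodeCondition deadline expires and the soft start exits with GUEST_START. As a rough illustration only (not minikube's actual implementation), a client-go loop that reproduces this kind of readiness poll could look like the sketch below; the kubeconfig path, node name, retry interval, and 6-minute timeout are assumptions taken from this report.

	// Minimal sketch (not minikube's code): poll a node's Ready condition the
	// way the log above does, retrying until a deadline is hit.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22122-2686/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()

		for {
			node, err := client.CoreV1().Nodes().Get(ctx, "functional-752103", metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						fmt.Println("node is Ready")
						return
					}
				}
			} else {
				// This is where the log's "connection refused ... will retry" lines come from.
				fmt.Println("will retry:", err)
			}
			select {
			case <-ctx.Done():
				// Mirrors the log's "WaitNodeCondition: context deadline exceeded".
				fmt.Println("gave up waiting for Ready:", ctx.Err())
				return
			case <-time.After(500 * time.Millisecond):
			}
		}
	}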
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-752103
helpers_test.go:244: (dbg) docker inspect functional-752103:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b",
	        "Created": "2025-12-13T18:27:36.869398923Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 33347,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T18:27:36.933863328Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b/hostname",
	        "HostsPath": "/var/lib/docker/containers/d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b/hosts",
	        "LogPath": "/var/lib/docker/containers/d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b/d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b-json.log",
	        "Name": "/functional-752103",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-752103:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-752103",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b",
	                "LowerDir": "/var/lib/docker/overlay2/4f08fa508e4fa526830ae73227295c5d2e0629c99efbda84852b3df25bd9e170-init/diff:/var/lib/docker/overlay2/4cda671c3c20fb572bbb254b6cb2d66de67b46788c2aa883ec19024f1ff16f23/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4f08fa508e4fa526830ae73227295c5d2e0629c99efbda84852b3df25bd9e170/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4f08fa508e4fa526830ae73227295c5d2e0629c99efbda84852b3df25bd9e170/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4f08fa508e4fa526830ae73227295c5d2e0629c99efbda84852b3df25bd9e170/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-752103",
	                "Source": "/var/lib/docker/volumes/functional-752103/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-752103",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-752103",
	                "name.minikube.sigs.k8s.io": "functional-752103",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "625ea12887c8956887678f2408d6edd5b98f62bce458a6906f4f662a3001a53b",
	            "SandboxKey": "/var/run/docker/netns/625ea12887c8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-752103": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7e:2c:83:4a:30:9a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "84df48e9f7dac8c6a1b67709e5eea216d99d3f16eb50b96c7f0e4a82b3193d56",
	                    "EndpointID": "e69b1f9610d40396647a2d78f0170c31b9cd8e641fc8465e742649cccee8e591",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-752103",
	                        "d72b547cdcc2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
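The inspect output shows the container is still running and that 8441/tcp is published on 127.0.0.1:32786, while the failing polls above target 192.168.49.2:8441 on the cluster network. A quick way to cross-check both endpoints from the host is sketched below; it is a post-mortem helper written for this report (an assumption, not part of the test suite) and relies only on `docker inspect --format` and a plain TCP dial, with the container name taken from the output above.

	// Read the host port Docker published for 8441/tcp and try a TCP dial to
	// both the published endpoint and the container IP the readiness poll used.
	package main

	import (
		"fmt"
		"net"
		"os/exec"
		"strings"
		"time"
	)

	func dial(addr string) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			fmt.Printf("%s: %v\n", addr, err) // e.g. "connect: connection refused"
			return
		}
		conn.Close()
		fmt.Printf("%s: reachable\n", addr)
	}

	func main() {
		out, err := exec.Command("docker", "inspect",
			"--format", `{{ (index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort }}`,
			"functional-752103").Output()
		if err != nil {
			panic(err)
		}
		hostPort := strings.TrimSpace(string(out))

		dial("127.0.0.1:" + hostPort) // published port (32786 in the inspect output)
		dial("192.168.49.2:8441")     // container address the readiness poll uses
	}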
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-752103 -n functional-752103
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-752103 -n functional-752103: exit status 2 (431.035864ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
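The helper tolerates exit status 2 here because `minikube status` reports cluster state partly through its exit code: the host container prints "Running" even though the control-plane components never came up. A small sketch of how a wrapper can capture that exit code is below; the binary path and profile name are copied from the command above, and the meaning of specific codes is left to minikube's own documentation.

	// Sketch: run "minikube status" and surface its exit code alongside stdout,
	// as the post-mortem helper does.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "status",
			"--format={{.Host}}", "-p", "functional-752103", "-n", "functional-752103")
		out, err := cmd.Output()
		fmt.Printf("stdout: %s\n", out) // "Running" in this report

		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// Non-zero here does not necessarily mean the command itself broke;
			// it reflects cluster state (exit status 2 in this run).
			fmt.Println("exit code:", exitErr.ExitCode())
		} else if err != nil {
			fmt.Println("failed to run:", err)
		}
	}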
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-752103 logs -n 25: (1.308212636s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-350101 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                  │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ image          │ functional-350101 image load --daemon kicbase/echo-server:functional-350101 --alsologtostderr                                                             │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ ssh            │ functional-350101 ssh sudo cat /etc/ssl/certs/46372.pem                                                                                                   │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ ssh            │ functional-350101 ssh sudo cat /usr/share/ca-certificates/46372.pem                                                                                       │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ image          │ functional-350101 image ls                                                                                                                                │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ ssh            │ functional-350101 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ image          │ functional-350101 image save kicbase/echo-server:functional-350101 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ image          │ functional-350101 image rm kicbase/echo-server:functional-350101 --alsologtostderr                                                                        │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ image          │ functional-350101 image ls                                                                                                                                │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ image          │ functional-350101 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ update-context │ functional-350101 update-context --alsologtostderr -v=2                                                                                                   │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ image          │ functional-350101 image ls                                                                                                                                │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ update-context │ functional-350101 update-context --alsologtostderr -v=2                                                                                                   │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ update-context │ functional-350101 update-context --alsologtostderr -v=2                                                                                                   │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ image          │ functional-350101 image save --daemon kicbase/echo-server:functional-350101 --alsologtostderr                                                             │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ image          │ functional-350101 image ls --format yaml --alsologtostderr                                                                                                │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ image          │ functional-350101 image ls --format short --alsologtostderr                                                                                               │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ ssh            │ functional-350101 ssh pgrep buildkitd                                                                                                                     │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │                     │
	│ image          │ functional-350101 image ls --format json --alsologtostderr                                                                                                │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ image          │ functional-350101 image ls --format table --alsologtostderr                                                                                               │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ image          │ functional-350101 image build -t localhost/my-image:functional-350101 testdata/build --alsologtostderr                                                    │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ image          │ functional-350101 image ls                                                                                                                                │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ delete         │ -p functional-350101                                                                                                                                      │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ start          │ -p functional-752103 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0         │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │                     │
	│ start          │ -p functional-752103 --alsologtostderr -v=8                                                                                                               │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:35 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 18:35:53
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 18:35:53.999245   38829 out.go:360] Setting OutFile to fd 1 ...
	I1213 18:35:53.999434   38829 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:35:53.999464   38829 out.go:374] Setting ErrFile to fd 2...
	I1213 18:35:53.999486   38829 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:35:53.999778   38829 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-2686/.minikube/bin
	I1213 18:35:54.000250   38829 out.go:368] Setting JSON to false
	I1213 18:35:54.001308   38829 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4706,"bootTime":1765646248,"procs":163,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 18:35:54.001457   38829 start.go:143] virtualization:  
	I1213 18:35:54.010388   38829 out.go:179] * [functional-752103] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 18:35:54.014157   38829 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 18:35:54.014353   38829 notify.go:221] Checking for updates...
	I1213 18:35:54.020075   38829 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 18:35:54.023186   38829 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-2686/kubeconfig
	I1213 18:35:54.026171   38829 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-2686/.minikube
	I1213 18:35:54.029213   38829 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 18:35:54.032235   38829 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 18:35:54.035744   38829 config.go:182] Loaded profile config "functional-752103": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 18:35:54.035909   38829 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 18:35:54.059624   38829 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 18:35:54.059744   38829 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 18:35:54.127464   38829 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 18:35:54.118134446 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 18:35:54.127571   38829 docker.go:319] overlay module found
	I1213 18:35:54.130605   38829 out.go:179] * Using the docker driver based on existing profile
	I1213 18:35:54.133521   38829 start.go:309] selected driver: docker
	I1213 18:35:54.133548   38829 start.go:927] validating driver "docker" against &{Name:functional-752103 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-752103 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 18:35:54.133668   38829 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 18:35:54.133779   38829 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 18:35:54.194306   38829 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 18:35:54.184244205 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 18:35:54.194716   38829 cni.go:84] Creating CNI manager for ""
	I1213 18:35:54.194772   38829 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 18:35:54.194827   38829 start.go:353] cluster config:
	{Name:functional-752103 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-752103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 18:35:54.197953   38829 out.go:179] * Starting "functional-752103" primary control-plane node in "functional-752103" cluster
	I1213 18:35:54.200965   38829 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 18:35:54.203964   38829 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 18:35:54.207111   38829 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 18:35:54.207169   38829 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-2686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1213 18:35:54.207189   38829 cache.go:65] Caching tarball of preloaded images
	I1213 18:35:54.207200   38829 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 18:35:54.207268   38829 preload.go:238] Found /home/jenkins/minikube-integration/22122-2686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 18:35:54.207278   38829 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1213 18:35:54.207380   38829 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/config.json ...
	I1213 18:35:54.226684   38829 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 18:35:54.226707   38829 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 18:35:54.226736   38829 cache.go:243] Successfully downloaded all kic artifacts
	I1213 18:35:54.226765   38829 start.go:360] acquireMachinesLock for functional-752103: {Name:mkf4ec1d9e1836ef54983db4562aedfd1a9c51c3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 18:35:54.226834   38829 start.go:364] duration metric: took 45.136µs to acquireMachinesLock for "functional-752103"
	I1213 18:35:54.226856   38829 start.go:96] Skipping create...Using existing machine configuration
	I1213 18:35:54.226865   38829 fix.go:54] fixHost starting: 
	I1213 18:35:54.227126   38829 cli_runner.go:164] Run: docker container inspect functional-752103 --format={{.State.Status}}
	I1213 18:35:54.245088   38829 fix.go:112] recreateIfNeeded on functional-752103: state=Running err=<nil>
	W1213 18:35:54.245125   38829 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 18:35:54.248193   38829 out.go:252] * Updating the running docker "functional-752103" container ...
	I1213 18:35:54.248225   38829 machine.go:94] provisionDockerMachine start ...
	I1213 18:35:54.248302   38829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:35:54.265418   38829 main.go:143] libmachine: Using SSH client type: native
	I1213 18:35:54.265750   38829 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1213 18:35:54.265765   38829 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 18:35:54.412628   38829 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-752103
	
	I1213 18:35:54.412654   38829 ubuntu.go:182] provisioning hostname "functional-752103"
	I1213 18:35:54.412716   38829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:35:54.431532   38829 main.go:143] libmachine: Using SSH client type: native
	I1213 18:35:54.431834   38829 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1213 18:35:54.431851   38829 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-752103 && echo "functional-752103" | sudo tee /etc/hostname
	I1213 18:35:54.592050   38829 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-752103
	
	I1213 18:35:54.592214   38829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:35:54.614592   38829 main.go:143] libmachine: Using SSH client type: native
	I1213 18:35:54.614908   38829 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1213 18:35:54.614930   38829 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-752103' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-752103/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-752103' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 18:35:54.769516   38829 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 18:35:54.769546   38829 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-2686/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-2686/.minikube}
	I1213 18:35:54.769572   38829 ubuntu.go:190] setting up certificates
	I1213 18:35:54.769581   38829 provision.go:84] configureAuth start
	I1213 18:35:54.769640   38829 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-752103
	I1213 18:35:54.787462   38829 provision.go:143] copyHostCerts
	I1213 18:35:54.787509   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem
	I1213 18:35:54.787551   38829 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem, removing ...
	I1213 18:35:54.787563   38829 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem
	I1213 18:35:54.787650   38829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem (1675 bytes)
	I1213 18:35:54.787740   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem
	I1213 18:35:54.787760   38829 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem, removing ...
	I1213 18:35:54.787765   38829 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem
	I1213 18:35:54.787800   38829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem (1082 bytes)
	I1213 18:35:54.787845   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem
	I1213 18:35:54.787868   38829 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem, removing ...
	I1213 18:35:54.787877   38829 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem
	I1213 18:35:54.787902   38829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem (1123 bytes)
	I1213 18:35:54.787955   38829 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem org=jenkins.functional-752103 san=[127.0.0.1 192.168.49.2 functional-752103 localhost minikube]
	I1213 18:35:54.878725   38829 provision.go:177] copyRemoteCerts
	I1213 18:35:54.878794   38829 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 18:35:54.878839   38829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:35:54.895961   38829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa Username:docker}
	I1213 18:35:55.009601   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1213 18:35:55.009696   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 18:35:55.033852   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1213 18:35:55.033923   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 18:35:55.052749   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1213 18:35:55.052813   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 18:35:55.072069   38829 provision.go:87] duration metric: took 302.464055ms to configureAuth
	I1213 18:35:55.072107   38829 ubuntu.go:206] setting minikube options for container-runtime
	I1213 18:35:55.072313   38829 config.go:182] Loaded profile config "functional-752103": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 18:35:55.072426   38829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:35:55.092406   38829 main.go:143] libmachine: Using SSH client type: native
	I1213 18:35:55.092745   38829 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1213 18:35:55.092771   38829 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 18:35:55.413226   38829 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 18:35:55.413251   38829 machine.go:97] duration metric: took 1.16501875s to provisionDockerMachine
	I1213 18:35:55.413264   38829 start.go:293] postStartSetup for "functional-752103" (driver="docker")
	I1213 18:35:55.413300   38829 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 18:35:55.413403   38829 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 18:35:55.413470   38829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:35:55.430709   38829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa Username:docker}
	I1213 18:35:55.537093   38829 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 18:35:55.540324   38829 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1213 18:35:55.540345   38829 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1213 18:35:55.540349   38829 command_runner.go:130] > VERSION_ID="12"
	I1213 18:35:55.540354   38829 command_runner.go:130] > VERSION="12 (bookworm)"
	I1213 18:35:55.540359   38829 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1213 18:35:55.540363   38829 command_runner.go:130] > ID=debian
	I1213 18:35:55.540368   38829 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1213 18:35:55.540373   38829 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1213 18:35:55.540379   38829 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1213 18:35:55.540743   38829 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 18:35:55.540767   38829 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 18:35:55.540779   38829 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-2686/.minikube/addons for local assets ...
	I1213 18:35:55.540839   38829 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-2686/.minikube/files for local assets ...
	I1213 18:35:55.540926   38829 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem -> 46372.pem in /etc/ssl/certs
	I1213 18:35:55.540938   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem -> /etc/ssl/certs/46372.pem
	I1213 18:35:55.541035   38829 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/test/nested/copy/4637/hosts -> hosts in /etc/test/nested/copy/4637
	I1213 18:35:55.541044   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/test/nested/copy/4637/hosts -> /etc/test/nested/copy/4637/hosts
	I1213 18:35:55.541087   38829 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4637
	I1213 18:35:55.548955   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem --> /etc/ssl/certs/46372.pem (1708 bytes)
	I1213 18:35:55.566460   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/test/nested/copy/4637/hosts --> /etc/test/nested/copy/4637/hosts (40 bytes)
	I1213 18:35:55.584163   38829 start.go:296] duration metric: took 170.869499ms for postStartSetup
	I1213 18:35:55.584240   38829 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 18:35:55.584294   38829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:35:55.601966   38829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa Username:docker}
	I1213 18:35:55.706486   38829 command_runner.go:130] > 11%
	I1213 18:35:55.706569   38829 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 18:35:55.711597   38829 command_runner.go:130] > 174G
	I1213 18:35:55.711643   38829 fix.go:56] duration metric: took 1.484775946s for fixHost
	I1213 18:35:55.711654   38829 start.go:83] releasing machines lock for "functional-752103", held for 1.484809349s
	I1213 18:35:55.711733   38829 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-752103
	I1213 18:35:55.731505   38829 ssh_runner.go:195] Run: cat /version.json
	I1213 18:35:55.731524   38829 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 18:35:55.731557   38829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:35:55.731578   38829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:35:55.752781   38829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa Username:docker}
	I1213 18:35:55.757282   38829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa Username:docker}
	I1213 18:35:55.945606   38829 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1213 18:35:55.945674   38829 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "9f3959633d311997d75aab86f8ff840f224c6486"}
	I1213 18:35:55.945816   38829 ssh_runner.go:195] Run: systemctl --version
	I1213 18:35:55.951961   38829 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1213 18:35:55.951999   38829 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1213 18:35:55.952322   38829 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 18:35:55.992229   38829 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1213 18:35:56.001527   38829 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1213 18:35:56.001762   38829 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 18:35:56.001849   38829 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 18:35:56.014010   38829 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 18:35:56.014037   38829 start.go:496] detecting cgroup driver to use...
	I1213 18:35:56.014094   38829 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 18:35:56.014182   38829 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 18:35:56.030879   38829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 18:35:56.046797   38829 docker.go:218] disabling cri-docker service (if available) ...
	I1213 18:35:56.046882   38829 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 18:35:56.067384   38829 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 18:35:56.080815   38829 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 18:35:56.192099   38829 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 18:35:56.317541   38829 docker.go:234] disabling docker service ...
	I1213 18:35:56.317693   38829 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 18:35:56.332696   38829 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 18:35:56.345912   38829 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 18:35:56.463560   38829 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 18:35:56.579100   38829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 18:35:56.592582   38829 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 18:35:56.605285   38829 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1213 18:35:56.606432   38829 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 18:35:56.606495   38829 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:35:56.615251   38829 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 18:35:56.615329   38829 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:35:56.624699   38829 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:35:56.633587   38829 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:35:56.642744   38829 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 18:35:56.651128   38829 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:35:56.660108   38829 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:35:56.669661   38829 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
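The run of sed edits above against /etc/crio/crio.conf.d/02-crio.conf pins the pause image, switches the cgroup manager to cgroupfs, places conmon in the pod cgroup, and re-inserts the unprivileged-port sysctl. Reconstructed purely from those sed expressions (not read back from the host, so the surrounding keys and ordering are an assumption), the affected part of the drop-in afterwards should look roughly like:

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]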
	I1213 18:35:56.678839   38829 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 18:35:56.685773   38829 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1213 18:35:56.686744   38829 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 18:35:56.694432   38829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 18:35:56.830483   38829 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 18:35:57.005048   38829 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 18:35:57.005450   38829 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 18:35:57.010285   38829 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1213 18:35:57.010309   38829 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1213 18:35:57.010316   38829 command_runner.go:130] > Device: 0,72	Inode: 1640        Links: 1
	I1213 18:35:57.010333   38829 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 18:35:57.010338   38829 command_runner.go:130] > Access: 2025-12-13 18:35:56.944672058 +0000
	I1213 18:35:57.010348   38829 command_runner.go:130] > Modify: 2025-12-13 18:35:56.944672058 +0000
	I1213 18:35:57.010355   38829 command_runner.go:130] > Change: 2025-12-13 18:35:56.944672058 +0000
	I1213 18:35:57.010364   38829 command_runner.go:130] >  Birth: -
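The "Will wait 60s" steps around this stat call poll until the CRI-O socket exists and crictl responds. As a rough sketch of that pattern only (hypothetical helper name, not minikube's actual implementation), a Go poll-with-timeout could look like:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls until path exists as a unix socket or the timeout expires.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
				return nil // socket is present
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}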
	I1213 18:35:57.010406   38829 start.go:564] Will wait 60s for crictl version
	I1213 18:35:57.010459   38829 ssh_runner.go:195] Run: which crictl
	I1213 18:35:57.014231   38829 command_runner.go:130] > /usr/local/bin/crictl
	I1213 18:35:57.014339   38829 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 18:35:57.039763   38829 command_runner.go:130] > Version:  0.1.0
	I1213 18:35:57.039785   38829 command_runner.go:130] > RuntimeName:  cri-o
	I1213 18:35:57.039789   38829 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1213 18:35:57.039795   38829 command_runner.go:130] > RuntimeApiVersion:  v1
	I1213 18:35:57.039807   38829 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 18:35:57.039886   38829 ssh_runner.go:195] Run: crio --version
	I1213 18:35:57.067200   38829 command_runner.go:130] > crio version 1.34.3
	I1213 18:35:57.067289   38829 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1213 18:35:57.067311   38829 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1213 18:35:57.067352   38829 command_runner.go:130] >    GitTreeState:   dirty
	I1213 18:35:57.067376   38829 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1213 18:35:57.067397   38829 command_runner.go:130] >    GoVersion:      go1.24.6
	I1213 18:35:57.067430   38829 command_runner.go:130] >    Compiler:       gc
	I1213 18:35:57.067455   38829 command_runner.go:130] >    Platform:       linux/arm64
	I1213 18:35:57.067476   38829 command_runner.go:130] >    Linkmode:       static
	I1213 18:35:57.067513   38829 command_runner.go:130] >    BuildTags:
	I1213 18:35:57.067537   38829 command_runner.go:130] >      static
	I1213 18:35:57.067557   38829 command_runner.go:130] >      netgo
	I1213 18:35:57.067592   38829 command_runner.go:130] >      osusergo
	I1213 18:35:57.067614   38829 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1213 18:35:57.067632   38829 command_runner.go:130] >      seccomp
	I1213 18:35:57.067651   38829 command_runner.go:130] >      apparmor
	I1213 18:35:57.067685   38829 command_runner.go:130] >      selinux
	I1213 18:35:57.067706   38829 command_runner.go:130] >    LDFlags:          unknown
	I1213 18:35:57.067726   38829 command_runner.go:130] >    SeccompEnabled:   true
	I1213 18:35:57.067760   38829 command_runner.go:130] >    AppArmorEnabled:  false
	I1213 18:35:57.069374   38829 ssh_runner.go:195] Run: crio --version
	I1213 18:35:57.097856   38829 command_runner.go:130] > crio version 1.34.3
	I1213 18:35:57.097937   38829 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1213 18:35:57.097971   38829 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1213 18:35:57.098005   38829 command_runner.go:130] >    GitTreeState:   dirty
	I1213 18:35:57.098025   38829 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1213 18:35:57.098058   38829 command_runner.go:130] >    GoVersion:      go1.24.6
	I1213 18:35:57.098082   38829 command_runner.go:130] >    Compiler:       gc
	I1213 18:35:57.098103   38829 command_runner.go:130] >    Platform:       linux/arm64
	I1213 18:35:57.098156   38829 command_runner.go:130] >    Linkmode:       static
	I1213 18:35:57.098180   38829 command_runner.go:130] >    BuildTags:
	I1213 18:35:57.098200   38829 command_runner.go:130] >      static
	I1213 18:35:57.098234   38829 command_runner.go:130] >      netgo
	I1213 18:35:57.098253   38829 command_runner.go:130] >      osusergo
	I1213 18:35:57.098277   38829 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1213 18:35:57.098306   38829 command_runner.go:130] >      seccomp
	I1213 18:35:57.098328   38829 command_runner.go:130] >      apparmor
	I1213 18:35:57.098348   38829 command_runner.go:130] >      selinux
	I1213 18:35:57.098384   38829 command_runner.go:130] >    LDFlags:          unknown
	I1213 18:35:57.098407   38829 command_runner.go:130] >    SeccompEnabled:   true
	I1213 18:35:57.098425   38829 command_runner.go:130] >    AppArmorEnabled:  false
	I1213 18:35:57.103998   38829 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1213 18:35:57.106795   38829 cli_runner.go:164] Run: docker network inspect functional-752103 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 18:35:57.122531   38829 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 18:35:57.126557   38829 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1213 18:35:57.126659   38829 kubeadm.go:884] updating cluster {Name:functional-752103 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-752103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 18:35:57.126789   38829 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 18:35:57.126855   38829 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 18:35:57.159258   38829 command_runner.go:130] > {
	I1213 18:35:57.159281   38829 command_runner.go:130] >   "images":  [
	I1213 18:35:57.159286   38829 command_runner.go:130] >     {
	I1213 18:35:57.159295   38829 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1213 18:35:57.159299   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.159305   38829 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1213 18:35:57.159309   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159312   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.159321   38829 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1213 18:35:57.159333   38829 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1213 18:35:57.159349   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159354   38829 command_runner.go:130] >       "size":  "111333938",
	I1213 18:35:57.159358   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.159370   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.159373   38829 command_runner.go:130] >     },
	I1213 18:35:57.159376   38829 command_runner.go:130] >     {
	I1213 18:35:57.159382   38829 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1213 18:35:57.159389   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.159394   38829 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1213 18:35:57.159398   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159402   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.159410   38829 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1213 18:35:57.159421   38829 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1213 18:35:57.159425   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159429   38829 command_runner.go:130] >       "size":  "29037500",
	I1213 18:35:57.159435   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.159443   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.159450   38829 command_runner.go:130] >     },
	I1213 18:35:57.159453   38829 command_runner.go:130] >     {
	I1213 18:35:57.159459   38829 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1213 18:35:57.159466   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.159471   38829 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1213 18:35:57.159474   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159481   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.159489   38829 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1213 18:35:57.159500   38829 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1213 18:35:57.159504   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159508   38829 command_runner.go:130] >       "size":  "74491780",
	I1213 18:35:57.159514   38829 command_runner.go:130] >       "username":  "nonroot",
	I1213 18:35:57.159519   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.159526   38829 command_runner.go:130] >     },
	I1213 18:35:57.159529   38829 command_runner.go:130] >     {
	I1213 18:35:57.159536   38829 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1213 18:35:57.159548   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.159554   38829 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1213 18:35:57.159560   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159564   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.159572   38829 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1213 18:35:57.159582   38829 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1213 18:35:57.159586   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159596   38829 command_runner.go:130] >       "size":  "60857170",
	I1213 18:35:57.159600   38829 command_runner.go:130] >       "uid":  {
	I1213 18:35:57.159604   38829 command_runner.go:130] >         "value":  "0"
	I1213 18:35:57.159607   38829 command_runner.go:130] >       },
	I1213 18:35:57.159618   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.159626   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.159629   38829 command_runner.go:130] >     },
	I1213 18:35:57.159633   38829 command_runner.go:130] >     {
	I1213 18:35:57.159646   38829 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1213 18:35:57.159650   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.159655   38829 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1213 18:35:57.159661   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159665   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.159673   38829 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1213 18:35:57.159684   38829 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1213 18:35:57.159687   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159691   38829 command_runner.go:130] >       "size":  "84949999",
	I1213 18:35:57.159697   38829 command_runner.go:130] >       "uid":  {
	I1213 18:35:57.159701   38829 command_runner.go:130] >         "value":  "0"
	I1213 18:35:57.159706   38829 command_runner.go:130] >       },
	I1213 18:35:57.159710   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.159720   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.159723   38829 command_runner.go:130] >     },
	I1213 18:35:57.159726   38829 command_runner.go:130] >     {
	I1213 18:35:57.159733   38829 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1213 18:35:57.159740   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.159750   38829 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1213 18:35:57.159756   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159762   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.159771   38829 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1213 18:35:57.159782   38829 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1213 18:35:57.159786   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159790   38829 command_runner.go:130] >       "size":  "72170325",
	I1213 18:35:57.159794   38829 command_runner.go:130] >       "uid":  {
	I1213 18:35:57.159800   38829 command_runner.go:130] >         "value":  "0"
	I1213 18:35:57.159804   38829 command_runner.go:130] >       },
	I1213 18:35:57.159810   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.159814   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.159820   38829 command_runner.go:130] >     },
	I1213 18:35:57.159823   38829 command_runner.go:130] >     {
	I1213 18:35:57.159829   38829 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1213 18:35:57.159836   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.159841   38829 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1213 18:35:57.159847   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159851   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.159859   38829 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1213 18:35:57.159870   38829 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1213 18:35:57.159874   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159878   38829 command_runner.go:130] >       "size":  "74106775",
	I1213 18:35:57.159882   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.159888   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.159892   38829 command_runner.go:130] >     },
	I1213 18:35:57.159897   38829 command_runner.go:130] >     {
	I1213 18:35:57.159904   38829 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1213 18:35:57.159910   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.159916   38829 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1213 18:35:57.159926   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159934   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.159942   38829 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1213 18:35:57.159966   38829 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1213 18:35:57.159973   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159977   38829 command_runner.go:130] >       "size":  "49822549",
	I1213 18:35:57.159981   38829 command_runner.go:130] >       "uid":  {
	I1213 18:35:57.159985   38829 command_runner.go:130] >         "value":  "0"
	I1213 18:35:57.159991   38829 command_runner.go:130] >       },
	I1213 18:35:57.159995   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.160003   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.160008   38829 command_runner.go:130] >     },
	I1213 18:35:57.160011   38829 command_runner.go:130] >     {
	I1213 18:35:57.160017   38829 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1213 18:35:57.160025   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.160030   38829 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1213 18:35:57.160033   38829 command_runner.go:130] >       ],
	I1213 18:35:57.160040   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.160048   38829 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1213 18:35:57.160059   38829 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1213 18:35:57.160063   38829 command_runner.go:130] >       ],
	I1213 18:35:57.160067   38829 command_runner.go:130] >       "size":  "519884",
	I1213 18:35:57.160070   38829 command_runner.go:130] >       "uid":  {
	I1213 18:35:57.160077   38829 command_runner.go:130] >         "value":  "65535"
	I1213 18:35:57.160080   38829 command_runner.go:130] >       },
	I1213 18:35:57.160084   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.160093   38829 command_runner.go:130] >       "pinned":  true
	I1213 18:35:57.160096   38829 command_runner.go:130] >     }
	I1213 18:35:57.160101   38829 command_runner.go:130] >   ]
	I1213 18:35:57.160112   38829 command_runner.go:130] > }
	I1213 18:35:57.162388   38829 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 18:35:57.162414   38829 crio.go:433] Images already preloaded, skipping extraction
	I1213 18:35:57.162470   38829 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 18:35:57.186777   38829 command_runner.go:130] > {
	I1213 18:35:57.186796   38829 command_runner.go:130] >   "images":  [
	I1213 18:35:57.186801   38829 command_runner.go:130] >     {
	I1213 18:35:57.186817   38829 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1213 18:35:57.186822   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.186828   38829 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1213 18:35:57.186832   38829 command_runner.go:130] >       ],
	I1213 18:35:57.186836   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.186846   38829 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1213 18:35:57.186854   38829 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1213 18:35:57.186857   38829 command_runner.go:130] >       ],
	I1213 18:35:57.186861   38829 command_runner.go:130] >       "size":  "111333938",
	I1213 18:35:57.186865   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.186873   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.186877   38829 command_runner.go:130] >     },
	I1213 18:35:57.186880   38829 command_runner.go:130] >     {
	I1213 18:35:57.186886   38829 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1213 18:35:57.186890   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.186895   38829 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1213 18:35:57.186898   38829 command_runner.go:130] >       ],
	I1213 18:35:57.186902   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.186913   38829 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1213 18:35:57.186921   38829 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1213 18:35:57.186928   38829 command_runner.go:130] >       ],
	I1213 18:35:57.186933   38829 command_runner.go:130] >       "size":  "29037500",
	I1213 18:35:57.186936   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.186942   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.186945   38829 command_runner.go:130] >     },
	I1213 18:35:57.186948   38829 command_runner.go:130] >     {
	I1213 18:35:57.186954   38829 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1213 18:35:57.186958   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.186963   38829 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1213 18:35:57.186966   38829 command_runner.go:130] >       ],
	I1213 18:35:57.186970   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.186977   38829 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1213 18:35:57.186985   38829 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1213 18:35:57.186992   38829 command_runner.go:130] >       ],
	I1213 18:35:57.186996   38829 command_runner.go:130] >       "size":  "74491780",
	I1213 18:35:57.187000   38829 command_runner.go:130] >       "username":  "nonroot",
	I1213 18:35:57.187004   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.187007   38829 command_runner.go:130] >     },
	I1213 18:35:57.187009   38829 command_runner.go:130] >     {
	I1213 18:35:57.187016   38829 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1213 18:35:57.187020   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.187024   38829 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1213 18:35:57.187029   38829 command_runner.go:130] >       ],
	I1213 18:35:57.187033   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.187041   38829 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1213 18:35:57.187050   38829 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1213 18:35:57.187053   38829 command_runner.go:130] >       ],
	I1213 18:35:57.187057   38829 command_runner.go:130] >       "size":  "60857170",
	I1213 18:35:57.187061   38829 command_runner.go:130] >       "uid":  {
	I1213 18:35:57.187064   38829 command_runner.go:130] >         "value":  "0"
	I1213 18:35:57.187067   38829 command_runner.go:130] >       },
	I1213 18:35:57.187075   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.187079   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.187082   38829 command_runner.go:130] >     },
	I1213 18:35:57.187085   38829 command_runner.go:130] >     {
	I1213 18:35:57.187092   38829 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1213 18:35:57.187095   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.187101   38829 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1213 18:35:57.187104   38829 command_runner.go:130] >       ],
	I1213 18:35:57.187108   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.187115   38829 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1213 18:35:57.187123   38829 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1213 18:35:57.187126   38829 command_runner.go:130] >       ],
	I1213 18:35:57.187130   38829 command_runner.go:130] >       "size":  "84949999",
	I1213 18:35:57.187134   38829 command_runner.go:130] >       "uid":  {
	I1213 18:35:57.187137   38829 command_runner.go:130] >         "value":  "0"
	I1213 18:35:57.187146   38829 command_runner.go:130] >       },
	I1213 18:35:57.187149   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.187153   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.187157   38829 command_runner.go:130] >     },
	I1213 18:35:57.187159   38829 command_runner.go:130] >     {
	I1213 18:35:57.187166   38829 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1213 18:35:57.187170   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.187175   38829 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1213 18:35:57.187178   38829 command_runner.go:130] >       ],
	I1213 18:35:57.187182   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.187190   38829 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1213 18:35:57.187199   38829 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1213 18:35:57.187202   38829 command_runner.go:130] >       ],
	I1213 18:35:57.187206   38829 command_runner.go:130] >       "size":  "72170325",
	I1213 18:35:57.187209   38829 command_runner.go:130] >       "uid":  {
	I1213 18:35:57.187213   38829 command_runner.go:130] >         "value":  "0"
	I1213 18:35:57.187216   38829 command_runner.go:130] >       },
	I1213 18:35:57.187219   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.187223   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.187226   38829 command_runner.go:130] >     },
	I1213 18:35:57.187229   38829 command_runner.go:130] >     {
	I1213 18:35:57.187236   38829 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1213 18:35:57.187239   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.187244   38829 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1213 18:35:57.187247   38829 command_runner.go:130] >       ],
	I1213 18:35:57.187251   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.187258   38829 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1213 18:35:57.187266   38829 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1213 18:35:57.187269   38829 command_runner.go:130] >       ],
	I1213 18:35:57.187273   38829 command_runner.go:130] >       "size":  "74106775",
	I1213 18:35:57.187277   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.187280   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.187283   38829 command_runner.go:130] >     },
	I1213 18:35:57.187291   38829 command_runner.go:130] >     {
	I1213 18:35:57.187297   38829 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1213 18:35:57.187300   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.187306   38829 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1213 18:35:57.187309   38829 command_runner.go:130] >       ],
	I1213 18:35:57.187313   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.187321   38829 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1213 18:35:57.187337   38829 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1213 18:35:57.187340   38829 command_runner.go:130] >       ],
	I1213 18:35:57.187344   38829 command_runner.go:130] >       "size":  "49822549",
	I1213 18:35:57.187348   38829 command_runner.go:130] >       "uid":  {
	I1213 18:35:57.187352   38829 command_runner.go:130] >         "value":  "0"
	I1213 18:35:57.187355   38829 command_runner.go:130] >       },
	I1213 18:35:57.187358   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.187362   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.187364   38829 command_runner.go:130] >     },
	I1213 18:35:57.187367   38829 command_runner.go:130] >     {
	I1213 18:35:57.187374   38829 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1213 18:35:57.187378   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.187382   38829 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1213 18:35:57.187385   38829 command_runner.go:130] >       ],
	I1213 18:35:57.187389   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.187396   38829 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1213 18:35:57.187404   38829 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1213 18:35:57.187407   38829 command_runner.go:130] >       ],
	I1213 18:35:57.187410   38829 command_runner.go:130] >       "size":  "519884",
	I1213 18:35:57.187414   38829 command_runner.go:130] >       "uid":  {
	I1213 18:35:57.187417   38829 command_runner.go:130] >         "value":  "65535"
	I1213 18:35:57.187420   38829 command_runner.go:130] >       },
	I1213 18:35:57.187424   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.187428   38829 command_runner.go:130] >       "pinned":  true
	I1213 18:35:57.187431   38829 command_runner.go:130] >     }
	I1213 18:35:57.187434   38829 command_runner.go:130] >   ]
	I1213 18:35:57.187440   38829 command_runner.go:130] > }
	I1213 18:35:57.187570   38829 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 18:35:57.187578   38829 cache_images.go:86] Images are preloaded, skipping loading
	I1213 18:35:57.187585   38829 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1213 18:35:57.187672   38829 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-752103 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-752103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 18:35:57.187756   38829 ssh_runner.go:195] Run: crio config
	I1213 18:35:57.235276   38829 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1213 18:35:57.235304   38829 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1213 18:35:57.235312   38829 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1213 18:35:57.235316   38829 command_runner.go:130] > #
	I1213 18:35:57.235323   38829 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1213 18:35:57.235330   38829 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1213 18:35:57.235336   38829 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1213 18:35:57.235344   38829 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1213 18:35:57.235351   38829 command_runner.go:130] > # reload'.
	I1213 18:35:57.235358   38829 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1213 18:35:57.235367   38829 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1213 18:35:57.235374   38829 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1213 18:35:57.235386   38829 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1213 18:35:57.235390   38829 command_runner.go:130] > [crio]
	I1213 18:35:57.235397   38829 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1213 18:35:57.235406   38829 command_runner.go:130] > # containers images, in this directory.
	I1213 18:35:57.235421   38829 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1213 18:35:57.235432   38829 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1213 18:35:57.235437   38829 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1213 18:35:57.235445   38829 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1213 18:35:57.235452   38829 command_runner.go:130] > # imagestore = ""
	I1213 18:35:57.235458   38829 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1213 18:35:57.235468   38829 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1213 18:35:57.235475   38829 command_runner.go:130] > # storage_driver = "overlay"
	I1213 18:35:57.235481   38829 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1213 18:35:57.235491   38829 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1213 18:35:57.235495   38829 command_runner.go:130] > # storage_option = [
	I1213 18:35:57.235502   38829 command_runner.go:130] > # ]
	I1213 18:35:57.235511   38829 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1213 18:35:57.235518   38829 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1213 18:35:57.235533   38829 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1213 18:35:57.235539   38829 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1213 18:35:57.235547   38829 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1213 18:35:57.235554   38829 command_runner.go:130] > # always happen on a node reboot
	I1213 18:35:57.235660   38829 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1213 18:35:57.235692   38829 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1213 18:35:57.235700   38829 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1213 18:35:57.235705   38829 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1213 18:35:57.235710   38829 command_runner.go:130] > # version_file_persist = ""
	I1213 18:35:57.235718   38829 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1213 18:35:57.235727   38829 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1213 18:35:57.235730   38829 command_runner.go:130] > # internal_wipe = true
	I1213 18:35:57.235739   38829 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1213 18:35:57.235744   38829 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1213 18:35:57.235748   38829 command_runner.go:130] > # internal_repair = true
	I1213 18:35:57.235754   38829 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1213 18:35:57.235760   38829 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1213 18:35:57.235769   38829 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1213 18:35:57.235775   38829 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1213 18:35:57.235781   38829 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1213 18:35:57.235784   38829 command_runner.go:130] > [crio.api]
	I1213 18:35:57.235790   38829 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1213 18:35:57.235795   38829 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1213 18:35:57.235800   38829 command_runner.go:130] > # IP address on which the stream server will listen.
	I1213 18:35:57.235804   38829 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1213 18:35:57.235811   38829 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1213 18:35:57.235816   38829 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1213 18:35:57.235819   38829 command_runner.go:130] > # stream_port = "0"
	I1213 18:35:57.235824   38829 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1213 18:35:57.235828   38829 command_runner.go:130] > # stream_enable_tls = false
	I1213 18:35:57.235838   38829 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1213 18:35:57.235842   38829 command_runner.go:130] > # stream_idle_timeout = ""
	I1213 18:35:57.235849   38829 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1213 18:35:57.235854   38829 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1213 18:35:57.235858   38829 command_runner.go:130] > # stream_tls_cert = ""
	I1213 18:35:57.235864   38829 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1213 18:35:57.235869   38829 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1213 18:35:57.235873   38829 command_runner.go:130] > # stream_tls_key = ""
	I1213 18:35:57.235880   38829 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1213 18:35:57.235886   38829 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1213 18:35:57.235892   38829 command_runner.go:130] > # automatically pick up the changes.
	I1213 18:35:57.235896   38829 command_runner.go:130] > # stream_tls_ca = ""
	I1213 18:35:57.235914   38829 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1213 18:35:57.235918   38829 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1213 18:35:57.235926   38829 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1213 18:35:57.235930   38829 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1213 18:35:57.235936   38829 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1213 18:35:57.235942   38829 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1213 18:35:57.235945   38829 command_runner.go:130] > [crio.runtime]
	I1213 18:35:57.235951   38829 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1213 18:35:57.235956   38829 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1213 18:35:57.235960   38829 command_runner.go:130] > # "nofile=1024:2048"
	I1213 18:35:57.235965   38829 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1213 18:35:57.235969   38829 command_runner.go:130] > # default_ulimits = [
	I1213 18:35:57.235972   38829 command_runner.go:130] > # ]
	I1213 18:35:57.235978   38829 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1213 18:35:57.236231   38829 command_runner.go:130] > # no_pivot = false
	I1213 18:35:57.236246   38829 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1213 18:35:57.236252   38829 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1213 18:35:57.236258   38829 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1213 18:35:57.236264   38829 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1213 18:35:57.236272   38829 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1213 18:35:57.236280   38829 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1213 18:35:57.236292   38829 command_runner.go:130] > # conmon = ""
	I1213 18:35:57.236297   38829 command_runner.go:130] > # Cgroup setting for conmon
	I1213 18:35:57.236304   38829 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1213 18:35:57.236308   38829 command_runner.go:130] > conmon_cgroup = "pod"
	I1213 18:35:57.236314   38829 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1213 18:35:57.236320   38829 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1213 18:35:57.236335   38829 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1213 18:35:57.236339   38829 command_runner.go:130] > # conmon_env = [
	I1213 18:35:57.236342   38829 command_runner.go:130] > # ]
	I1213 18:35:57.236348   38829 command_runner.go:130] > # Additional environment variables to set for all the
	I1213 18:35:57.236353   38829 command_runner.go:130] > # containers. These are overridden if set in the
	I1213 18:35:57.236358   38829 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1213 18:35:57.236362   38829 command_runner.go:130] > # default_env = [
	I1213 18:35:57.236365   38829 command_runner.go:130] > # ]
	I1213 18:35:57.236370   38829 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1213 18:35:57.236378   38829 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1213 18:35:57.236386   38829 command_runner.go:130] > # selinux = false
	I1213 18:35:57.236397   38829 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1213 18:35:57.236405   38829 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1213 18:35:57.236415   38829 command_runner.go:130] > # This option supports live configuration reload.
	I1213 18:35:57.236419   38829 command_runner.go:130] > # seccomp_profile = ""
	I1213 18:35:57.236425   38829 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1213 18:35:57.236436   38829 command_runner.go:130] > # This option supports live configuration reload.
	I1213 18:35:57.236440   38829 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1213 18:35:57.236447   38829 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1213 18:35:57.236457   38829 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1213 18:35:57.236464   38829 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1213 18:35:57.236470   38829 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1213 18:35:57.236477   38829 command_runner.go:130] > # This option supports live configuration reload.
	I1213 18:35:57.236482   38829 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1213 18:35:57.236493   38829 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1213 18:35:57.236497   38829 command_runner.go:130] > # the cgroup blockio controller.
	I1213 18:35:57.236501   38829 command_runner.go:130] > # blockio_config_file = ""
	I1213 18:35:57.236512   38829 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1213 18:35:57.236519   38829 command_runner.go:130] > # blockio parameters.
	I1213 18:35:57.236524   38829 command_runner.go:130] > # blockio_reload = false
	I1213 18:35:57.236530   38829 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1213 18:35:57.236538   38829 command_runner.go:130] > # irqbalance daemon.
	I1213 18:35:57.236543   38829 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1213 18:35:57.236550   38829 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1213 18:35:57.236560   38829 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1213 18:35:57.236567   38829 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1213 18:35:57.236573   38829 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1213 18:35:57.236579   38829 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1213 18:35:57.236584   38829 command_runner.go:130] > # This option supports live configuration reload.
	I1213 18:35:57.236589   38829 command_runner.go:130] > # rdt_config_file = ""
	I1213 18:35:57.236594   38829 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1213 18:35:57.236600   38829 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1213 18:35:57.236606   38829 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1213 18:35:57.236612   38829 command_runner.go:130] > # separate_pull_cgroup = ""
	I1213 18:35:57.236619   38829 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1213 18:35:57.236626   38829 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1213 18:35:57.236633   38829 command_runner.go:130] > # will be added.
	I1213 18:35:57.236637   38829 command_runner.go:130] > # default_capabilities = [
	I1213 18:35:57.236640   38829 command_runner.go:130] > # 	"CHOWN",
	I1213 18:35:57.236644   38829 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1213 18:35:57.236647   38829 command_runner.go:130] > # 	"FSETID",
	I1213 18:35:57.236650   38829 command_runner.go:130] > # 	"FOWNER",
	I1213 18:35:57.236653   38829 command_runner.go:130] > # 	"SETGID",
	I1213 18:35:57.236656   38829 command_runner.go:130] > # 	"SETUID",
	I1213 18:35:57.236674   38829 command_runner.go:130] > # 	"SETPCAP",
	I1213 18:35:57.236679   38829 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1213 18:35:57.236682   38829 command_runner.go:130] > # 	"KILL",
	I1213 18:35:57.236685   38829 command_runner.go:130] > # ]
	I1213 18:35:57.236693   38829 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1213 18:35:57.236702   38829 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1213 18:35:57.236710   38829 command_runner.go:130] > # add_inheritable_capabilities = false
	I1213 18:35:57.236716   38829 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1213 18:35:57.236722   38829 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1213 18:35:57.236726   38829 command_runner.go:130] > default_sysctls = [
	I1213 18:35:57.236731   38829 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1213 18:35:57.236734   38829 command_runner.go:130] > ]
	I1213 18:35:57.236738   38829 command_runner.go:130] > # List of devices on the host that a
	I1213 18:35:57.236748   38829 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1213 18:35:57.236755   38829 command_runner.go:130] > # allowed_devices = [
	I1213 18:35:57.236758   38829 command_runner.go:130] > # 	"/dev/fuse",
	I1213 18:35:57.236762   38829 command_runner.go:130] > # 	"/dev/net/tun",
	I1213 18:35:57.236772   38829 command_runner.go:130] > # ]
	I1213 18:35:57.236777   38829 command_runner.go:130] > # List of additional devices. specified as
	I1213 18:35:57.236784   38829 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1213 18:35:57.236794   38829 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1213 18:35:57.236800   38829 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1213 18:35:57.236804   38829 command_runner.go:130] > # additional_devices = [
	I1213 18:35:57.236832   38829 command_runner.go:130] > # ]
	I1213 18:35:57.236837   38829 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1213 18:35:57.236841   38829 command_runner.go:130] > # cdi_spec_dirs = [
	I1213 18:35:57.236844   38829 command_runner.go:130] > # 	"/etc/cdi",
	I1213 18:35:57.236848   38829 command_runner.go:130] > # 	"/var/run/cdi",
	I1213 18:35:57.236854   38829 command_runner.go:130] > # ]
	I1213 18:35:57.236861   38829 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1213 18:35:57.236870   38829 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1213 18:35:57.236874   38829 command_runner.go:130] > # Defaults to false.
	I1213 18:35:57.236880   38829 command_runner.go:130] > # device_ownership_from_security_context = false
	I1213 18:35:57.236891   38829 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1213 18:35:57.236898   38829 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1213 18:35:57.236901   38829 command_runner.go:130] > # hooks_dir = [
	I1213 18:35:57.236908   38829 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1213 18:35:57.236915   38829 command_runner.go:130] > # ]
	I1213 18:35:57.236921   38829 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1213 18:35:57.236931   38829 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1213 18:35:57.236939   38829 command_runner.go:130] > # its default mounts from the following two files:
	I1213 18:35:57.236942   38829 command_runner.go:130] > #
	I1213 18:35:57.236949   38829 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1213 18:35:57.236959   38829 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1213 18:35:57.236964   38829 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1213 18:35:57.236967   38829 command_runner.go:130] > #
	I1213 18:35:57.236974   38829 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1213 18:35:57.236984   38829 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1213 18:35:57.236990   38829 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1213 18:35:57.236996   38829 command_runner.go:130] > #      only add mounts it finds in this file.
	I1213 18:35:57.237024   38829 command_runner.go:130] > #
	I1213 18:35:57.237029   38829 command_runner.go:130] > # default_mounts_file = ""
	I1213 18:35:57.237035   38829 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1213 18:35:57.237044   38829 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1213 18:35:57.237052   38829 command_runner.go:130] > # pids_limit = -1
	I1213 18:35:57.237058   38829 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1213 18:35:57.237065   38829 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1213 18:35:57.237075   38829 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1213 18:35:57.237084   38829 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1213 18:35:57.237092   38829 command_runner.go:130] > # log_size_max = -1
	I1213 18:35:57.237099   38829 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1213 18:35:57.237104   38829 command_runner.go:130] > # log_to_journald = false
	I1213 18:35:57.237114   38829 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1213 18:35:57.237119   38829 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1213 18:35:57.237125   38829 command_runner.go:130] > # Path to directory for container attach sockets.
	I1213 18:35:57.237130   38829 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1213 18:35:57.237137   38829 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1213 18:35:57.237145   38829 command_runner.go:130] > # bind_mount_prefix = ""
	I1213 18:35:57.237151   38829 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1213 18:35:57.237155   38829 command_runner.go:130] > # read_only = false
	I1213 18:35:57.237162   38829 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1213 18:35:57.237173   38829 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1213 18:35:57.237181   38829 command_runner.go:130] > # live configuration reload.
	I1213 18:35:57.237191   38829 command_runner.go:130] > # log_level = "info"
	I1213 18:35:57.237200   38829 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1213 18:35:57.237212   38829 command_runner.go:130] > # This option supports live configuration reload.
	I1213 18:35:57.237216   38829 command_runner.go:130] > # log_filter = ""
	I1213 18:35:57.237222   38829 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1213 18:35:57.237228   38829 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1213 18:35:57.237237   38829 command_runner.go:130] > # separated by comma.
	I1213 18:35:57.237245   38829 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 18:35:57.237249   38829 command_runner.go:130] > # uid_mappings = ""
	I1213 18:35:57.237255   38829 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1213 18:35:57.237265   38829 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1213 18:35:57.237269   38829 command_runner.go:130] > # separated by comma.
	I1213 18:35:57.237277   38829 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 18:35:57.237284   38829 command_runner.go:130] > # gid_mappings = ""
	I1213 18:35:57.237290   38829 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1213 18:35:57.237297   38829 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1213 18:35:57.237311   38829 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1213 18:35:57.237319   38829 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 18:35:57.237323   38829 command_runner.go:130] > # minimum_mappable_uid = -1
	I1213 18:35:57.237329   38829 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1213 18:35:57.237339   38829 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1213 18:35:57.237345   38829 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1213 18:35:57.237354   38829 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 18:35:57.237949   38829 command_runner.go:130] > # minimum_mappable_gid = -1
	I1213 18:35:57.237966   38829 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1213 18:35:57.237972   38829 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1213 18:35:57.237979   38829 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1213 18:35:57.238476   38829 command_runner.go:130] > # ctr_stop_timeout = 30
	I1213 18:35:57.238490   38829 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1213 18:35:57.238497   38829 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1213 18:35:57.238503   38829 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1213 18:35:57.238519   38829 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1213 18:35:57.238932   38829 command_runner.go:130] > # drop_infra_ctr = true
	I1213 18:35:57.238947   38829 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1213 18:35:57.238955   38829 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1213 18:35:57.238963   38829 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1213 18:35:57.239291   38829 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1213 18:35:57.239306   38829 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1213 18:35:57.239313   38829 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1213 18:35:57.239319   38829 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1213 18:35:57.239324   38829 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1213 18:35:57.239634   38829 command_runner.go:130] > # shared_cpuset = ""
	I1213 18:35:57.239648   38829 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1213 18:35:57.239654   38829 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1213 18:35:57.240060   38829 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1213 18:35:57.240075   38829 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1213 18:35:57.240414   38829 command_runner.go:130] > # pinns_path = ""
	I1213 18:35:57.240427   38829 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1213 18:35:57.240434   38829 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1213 18:35:57.240846   38829 command_runner.go:130] > # enable_criu_support = true
	I1213 18:35:57.240873   38829 command_runner.go:130] > # Enable/disable the generation of the container,
	I1213 18:35:57.240881   38829 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1213 18:35:57.241322   38829 command_runner.go:130] > # enable_pod_events = false
	I1213 18:35:57.241336   38829 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1213 18:35:57.241342   38829 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1213 18:35:57.241756   38829 command_runner.go:130] > # default_runtime = "crun"
	I1213 18:35:57.241768   38829 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1213 18:35:57.241777   38829 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1213 18:35:57.241786   38829 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1213 18:35:57.241791   38829 command_runner.go:130] > # creation as a file is not desired either.
	I1213 18:35:57.241800   38829 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1213 18:35:57.241820   38829 command_runner.go:130] > # the hostname is being managed dynamically.
	I1213 18:35:57.242010   38829 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1213 18:35:57.242355   38829 command_runner.go:130] > # ]
	I1213 18:35:57.242370   38829 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1213 18:35:57.242386   38829 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1213 18:35:57.242394   38829 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1213 18:35:57.242400   38829 command_runner.go:130] > # Each entry in the table should follow the format:
	I1213 18:35:57.242406   38829 command_runner.go:130] > #
	I1213 18:35:57.242412   38829 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1213 18:35:57.242419   38829 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1213 18:35:57.242423   38829 command_runner.go:130] > # runtime_type = "oci"
	I1213 18:35:57.242427   38829 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1213 18:35:57.242434   38829 command_runner.go:130] > # inherit_default_runtime = false
	I1213 18:35:57.242441   38829 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1213 18:35:57.242445   38829 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1213 18:35:57.242449   38829 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1213 18:35:57.242460   38829 command_runner.go:130] > # monitor_env = []
	I1213 18:35:57.242465   38829 command_runner.go:130] > # privileged_without_host_devices = false
	I1213 18:35:57.242470   38829 command_runner.go:130] > # allowed_annotations = []
	I1213 18:35:57.242487   38829 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1213 18:35:57.242491   38829 command_runner.go:130] > # no_sync_log = false
	I1213 18:35:57.242496   38829 command_runner.go:130] > # default_annotations = {}
	I1213 18:35:57.242500   38829 command_runner.go:130] > # stream_websockets = false
	I1213 18:35:57.242507   38829 command_runner.go:130] > # seccomp_profile = ""
	I1213 18:35:57.242553   38829 command_runner.go:130] > # Where:
	I1213 18:35:57.242564   38829 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1213 18:35:57.242570   38829 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1213 18:35:57.242577   38829 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1213 18:35:57.242583   38829 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1213 18:35:57.242587   38829 command_runner.go:130] > #   in $PATH.
	I1213 18:35:57.242593   38829 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1213 18:35:57.242598   38829 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1213 18:35:57.242614   38829 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1213 18:35:57.242620   38829 command_runner.go:130] > #   state.
	I1213 18:35:57.242626   38829 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1213 18:35:57.242633   38829 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1213 18:35:57.242641   38829 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1213 18:35:57.242647   38829 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1213 18:35:57.242652   38829 command_runner.go:130] > #   the values from the default runtime on load time.
	I1213 18:35:57.242659   38829 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1213 18:35:57.242665   38829 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1213 18:35:57.242671   38829 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1213 18:35:57.242684   38829 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1213 18:35:57.242694   38829 command_runner.go:130] > #   The currently recognized values are:
	I1213 18:35:57.242701   38829 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1213 18:35:57.242709   38829 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1213 18:35:57.242718   38829 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1213 18:35:57.242724   38829 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1213 18:35:57.242736   38829 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1213 18:35:57.242745   38829 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1213 18:35:57.242761   38829 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1213 18:35:57.242774   38829 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1213 18:35:57.242781   38829 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1213 18:35:57.242788   38829 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1213 18:35:57.242795   38829 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1213 18:35:57.242802   38829 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1213 18:35:57.242813   38829 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1213 18:35:57.242824   38829 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1213 18:35:57.242842   38829 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1213 18:35:57.242850   38829 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1213 18:35:57.242861   38829 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1213 18:35:57.242865   38829 command_runner.go:130] > #   deprecated option "conmon".
	I1213 18:35:57.242873   38829 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1213 18:35:57.242881   38829 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1213 18:35:57.242888   38829 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1213 18:35:57.242894   38829 command_runner.go:130] > #   should be moved to the container's cgroup
	I1213 18:35:57.242911   38829 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1213 18:35:57.242917   38829 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1213 18:35:57.242924   38829 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1213 18:35:57.242933   38829 command_runner.go:130] > #   conmon-rs by using:
	I1213 18:35:57.242941   38829 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1213 18:35:57.242954   38829 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1213 18:35:57.242962   38829 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1213 18:35:57.242973   38829 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1213 18:35:57.242978   38829 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1213 18:35:57.242995   38829 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1213 18:35:57.243003   38829 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1213 18:35:57.243008   38829 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1213 18:35:57.243017   38829 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1213 18:35:57.243027   38829 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1213 18:35:57.243033   38829 command_runner.go:130] > #   when a machine crash happens.
	I1213 18:35:57.243040   38829 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1213 18:35:57.243049   38829 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1213 18:35:57.243065   38829 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1213 18:35:57.243070   38829 command_runner.go:130] > #   seccomp profile for the runtime.
	I1213 18:35:57.243076   38829 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1213 18:35:57.243084   38829 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1213 18:35:57.243094   38829 command_runner.go:130] > #
	I1213 18:35:57.243099   38829 command_runner.go:130] > # Using the seccomp notifier feature:
	I1213 18:35:57.243102   38829 command_runner.go:130] > #
	I1213 18:35:57.243113   38829 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1213 18:35:57.243123   38829 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1213 18:35:57.243126   38829 command_runner.go:130] > #
	I1213 18:35:57.243139   38829 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1213 18:35:57.243153   38829 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1213 18:35:57.243157   38829 command_runner.go:130] > #
	I1213 18:35:57.243163   38829 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1213 18:35:57.243170   38829 command_runner.go:130] > # feature.
	I1213 18:35:57.243173   38829 command_runner.go:130] > #
	I1213 18:35:57.243179   38829 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1213 18:35:57.243186   38829 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1213 18:35:57.243196   38829 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1213 18:35:57.243208   38829 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1213 18:35:57.243219   38829 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1213 18:35:57.243222   38829 command_runner.go:130] > #
	I1213 18:35:57.243229   38829 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1213 18:35:57.243235   38829 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1213 18:35:57.243256   38829 command_runner.go:130] > #
	I1213 18:35:57.243267   38829 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1213 18:35:57.243274   38829 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1213 18:35:57.243283   38829 command_runner.go:130] > #
	I1213 18:35:57.243294   38829 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1213 18:35:57.243301   38829 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1213 18:35:57.243304   38829 command_runner.go:130] > # limitation.
	I1213 18:35:57.243341   38829 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1213 18:35:57.243623   38829 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1213 18:35:57.243757   38829 command_runner.go:130] > runtime_type = ""
	I1213 18:35:57.244003   38829 command_runner.go:130] > runtime_root = "/run/crun"
	I1213 18:35:57.244255   38829 command_runner.go:130] > inherit_default_runtime = false
	I1213 18:35:57.244399   38829 command_runner.go:130] > runtime_config_path = ""
	I1213 18:35:57.244539   38829 command_runner.go:130] > container_min_memory = ""
	I1213 18:35:57.244777   38829 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1213 18:35:57.245055   38829 command_runner.go:130] > monitor_cgroup = "pod"
	I1213 18:35:57.245214   38829 command_runner.go:130] > monitor_exec_cgroup = ""
	I1213 18:35:57.245448   38829 command_runner.go:130] > allowed_annotations = [
	I1213 18:35:57.245605   38829 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1213 18:35:57.245830   38829 command_runner.go:130] > ]
	I1213 18:35:57.246064   38829 command_runner.go:130] > privileged_without_host_devices = false
	I1213 18:35:57.246554   38829 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1213 18:35:57.246808   38829 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1213 18:35:57.246935   38829 command_runner.go:130] > runtime_type = ""
	I1213 18:35:57.247167   38829 command_runner.go:130] > runtime_root = "/run/runc"
	I1213 18:35:57.247404   38829 command_runner.go:130] > inherit_default_runtime = false
	I1213 18:35:57.247591   38829 command_runner.go:130] > runtime_config_path = ""
	I1213 18:35:57.247761   38829 command_runner.go:130] > container_min_memory = ""
	I1213 18:35:57.248046   38829 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1213 18:35:57.248332   38829 command_runner.go:130] > monitor_cgroup = "pod"
	I1213 18:35:57.248492   38829 command_runner.go:130] > monitor_exec_cgroup = ""
	I1213 18:35:57.248957   38829 command_runner.go:130] > privileged_without_host_devices = false
	I1213 18:35:57.249339   38829 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1213 18:35:57.249353   38829 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1213 18:35:57.249360   38829 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1213 18:35:57.249369   38829 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1213 18:35:57.249380   38829 command_runner.go:130] > # The currently supported resources are "cpuperiod" "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1213 18:35:57.249391   38829 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1213 18:35:57.249420   38829 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1213 18:35:57.249432   38829 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1213 18:35:57.249442   38829 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1213 18:35:57.249454   38829 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1213 18:35:57.249460   38829 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1213 18:35:57.249474   38829 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1213 18:35:57.249483   38829 command_runner.go:130] > # Example:
	I1213 18:35:57.249488   38829 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1213 18:35:57.249494   38829 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1213 18:35:57.249507   38829 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1213 18:35:57.249513   38829 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1213 18:35:57.249522   38829 command_runner.go:130] > # cpuset = "0-1"
	I1213 18:35:57.249525   38829 command_runner.go:130] > # cpushares = "5"
	I1213 18:35:57.249529   38829 command_runner.go:130] > # cpuquota = "1000"
	I1213 18:35:57.249533   38829 command_runner.go:130] > # cpuperiod = "100000"
	I1213 18:35:57.249548   38829 command_runner.go:130] > # cpulimit = "35"
	I1213 18:35:57.249556   38829 command_runner.go:130] > # Where:
	I1213 18:35:57.249560   38829 command_runner.go:130] > # The workload name is workload-type.
	I1213 18:35:57.249568   38829 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1213 18:35:57.249574   38829 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1213 18:35:57.249585   38829 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1213 18:35:57.249594   38829 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1213 18:35:57.249604   38829 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1213 18:35:57.249739   38829 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1213 18:35:57.249752   38829 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1213 18:35:57.249757   38829 command_runner.go:130] > # Default value is set to true
	I1213 18:35:57.250196   38829 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1213 18:35:57.250210   38829 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1213 18:35:57.250216   38829 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1213 18:35:57.250220   38829 command_runner.go:130] > # Default value is set to 'false'
	I1213 18:35:57.250699   38829 command_runner.go:130] > # disable_hostport_mapping = false
	I1213 18:35:57.250712   38829 command_runner.go:130] > # timezone To set the timezone for a container in CRI-O.
	I1213 18:35:57.250722   38829 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1213 18:35:57.251071   38829 command_runner.go:130] > # timezone = ""
	I1213 18:35:57.251082   38829 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1213 18:35:57.251086   38829 command_runner.go:130] > #
	I1213 18:35:57.251093   38829 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1213 18:35:57.251100   38829 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1213 18:35:57.251103   38829 command_runner.go:130] > [crio.image]
	I1213 18:35:57.251109   38829 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1213 18:35:57.251555   38829 command_runner.go:130] > # default_transport = "docker://"
	I1213 18:35:57.251569   38829 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1213 18:35:57.251576   38829 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1213 18:35:57.251964   38829 command_runner.go:130] > # global_auth_file = ""
	I1213 18:35:57.251977   38829 command_runner.go:130] > # The image used to instantiate infra containers.
	I1213 18:35:57.251982   38829 command_runner.go:130] > # This option supports live configuration reload.
	I1213 18:35:57.252443   38829 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1213 18:35:57.252459   38829 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1213 18:35:57.252468   38829 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1213 18:35:57.252474   38829 command_runner.go:130] > # This option supports live configuration reload.
	I1213 18:35:57.252817   38829 command_runner.go:130] > # pause_image_auth_file = ""
	I1213 18:35:57.252830   38829 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1213 18:35:57.252837   38829 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1213 18:35:57.252844   38829 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1213 18:35:57.252849   38829 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1213 18:35:57.253309   38829 command_runner.go:130] > # pause_command = "/pause"
	I1213 18:35:57.253323   38829 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1213 18:35:57.253330   38829 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1213 18:35:57.253336   38829 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1213 18:35:57.253342   38829 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1213 18:35:57.253349   38829 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1213 18:35:57.253355   38829 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1213 18:35:57.253590   38829 command_runner.go:130] > # pinned_images = [
	I1213 18:35:57.253600   38829 command_runner.go:130] > # ]
	I1213 18:35:57.253607   38829 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1213 18:35:57.253614   38829 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1213 18:35:57.253621   38829 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1213 18:35:57.253627   38829 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1213 18:35:57.253636   38829 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1213 18:35:57.253910   38829 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1213 18:35:57.253925   38829 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1213 18:35:57.253939   38829 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1213 18:35:57.253949   38829 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1213 18:35:57.253960   38829 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1213 18:35:57.253967   38829 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1213 18:35:57.253980   38829 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1213 18:35:57.253986   38829 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1213 18:35:57.253995   38829 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1213 18:35:57.254000   38829 command_runner.go:130] > # changing them here.
	I1213 18:35:57.254012   38829 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1213 18:35:57.254016   38829 command_runner.go:130] > # insecure_registries = [
	I1213 18:35:57.254268   38829 command_runner.go:130] > # ]
	I1213 18:35:57.254281   38829 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1213 18:35:57.254287   38829 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1213 18:35:57.254424   38829 command_runner.go:130] > # image_volumes = "mkdir"
	I1213 18:35:57.254436   38829 command_runner.go:130] > # Temporary directory to use for storing big files
	I1213 18:35:57.254580   38829 command_runner.go:130] > # big_files_temporary_dir = ""
	I1213 18:35:57.254592   38829 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1213 18:35:57.254600   38829 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1213 18:35:57.254897   38829 command_runner.go:130] > # auto_reload_registries = false
	I1213 18:35:57.254910   38829 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1213 18:35:57.254920   38829 command_runner.go:130] > # gets canceled. This value will also be used for calculating the pull progress interval to pull_progress_timeout / 10.
	I1213 18:35:57.254926   38829 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1213 18:35:57.254930   38829 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1213 18:35:57.254935   38829 command_runner.go:130] > # The mode of short name resolution.
	I1213 18:35:57.254941   38829 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1213 18:35:57.254949   38829 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1213 18:35:57.254965   38829 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1213 18:35:57.254970   38829 command_runner.go:130] > # short_name_mode = "enforcing"
	I1213 18:35:57.254982   38829 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1213 18:35:57.254988   38829 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1213 18:35:57.255234   38829 command_runner.go:130] > # oci_artifact_mount_support = true
	I1213 18:35:57.255247   38829 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1213 18:35:57.255251   38829 command_runner.go:130] > # CNI plugins.
	I1213 18:35:57.255254   38829 command_runner.go:130] > [crio.network]
	I1213 18:35:57.255260   38829 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1213 18:35:57.255266   38829 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1213 18:35:57.255275   38829 command_runner.go:130] > # cni_default_network = ""
	I1213 18:35:57.255283   38829 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1213 18:35:57.255416   38829 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1213 18:35:57.255429   38829 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1213 18:35:57.255573   38829 command_runner.go:130] > # plugin_dirs = [
	I1213 18:35:57.255807   38829 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1213 18:35:57.255816   38829 command_runner.go:130] > # ]
	I1213 18:35:57.255821   38829 command_runner.go:130] > # List of included pod metrics.
	I1213 18:35:57.255825   38829 command_runner.go:130] > # included_pod_metrics = [
	I1213 18:35:57.255828   38829 command_runner.go:130] > # ]
	I1213 18:35:57.255834   38829 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1213 18:35:57.255838   38829 command_runner.go:130] > [crio.metrics]
	I1213 18:35:57.255843   38829 command_runner.go:130] > # Globally enable or disable metrics support.
	I1213 18:35:57.255847   38829 command_runner.go:130] > # enable_metrics = false
	I1213 18:35:57.255851   38829 command_runner.go:130] > # Specify enabled metrics collectors.
	I1213 18:35:57.255867   38829 command_runner.go:130] > # Per default all metrics are enabled.
	I1213 18:35:57.255879   38829 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1213 18:35:57.255889   38829 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1213 18:35:57.255900   38829 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1213 18:35:57.255905   38829 command_runner.go:130] > # metrics_collectors = [
	I1213 18:35:57.256016   38829 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1213 18:35:57.256027   38829 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1213 18:35:57.256031   38829 command_runner.go:130] > # 	"containers_oom_total",
	I1213 18:35:57.256331   38829 command_runner.go:130] > # 	"processes_defunct",
	I1213 18:35:57.256341   38829 command_runner.go:130] > # 	"operations_total",
	I1213 18:35:57.256346   38829 command_runner.go:130] > # 	"operations_latency_seconds",
	I1213 18:35:57.256351   38829 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1213 18:35:57.256361   38829 command_runner.go:130] > # 	"operations_errors_total",
	I1213 18:35:57.256365   38829 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1213 18:35:57.256370   38829 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1213 18:35:57.256374   38829 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1213 18:35:57.257117   38829 command_runner.go:130] > # 	"image_pulls_success_total",
	I1213 18:35:57.257132   38829 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1213 18:35:57.257137   38829 command_runner.go:130] > # 	"containers_oom_count_total",
	I1213 18:35:57.257143   38829 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1213 18:35:57.257155   38829 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1213 18:35:57.257161   38829 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1213 18:35:57.257170   38829 command_runner.go:130] > # ]
	I1213 18:35:57.257177   38829 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1213 18:35:57.257185   38829 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1213 18:35:57.257191   38829 command_runner.go:130] > # The port on which the metrics server will listen.
	I1213 18:35:57.257199   38829 command_runner.go:130] > # metrics_port = 9090
	I1213 18:35:57.257204   38829 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1213 18:35:57.257212   38829 command_runner.go:130] > # metrics_socket = ""
	I1213 18:35:57.257233   38829 command_runner.go:130] > # The certificate for the secure metrics server.
	I1213 18:35:57.257245   38829 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1213 18:35:57.257252   38829 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1213 18:35:57.257260   38829 command_runner.go:130] > # certificate on any modification event.
	I1213 18:35:57.257270   38829 command_runner.go:130] > # metrics_cert = ""
	I1213 18:35:57.257276   38829 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1213 18:35:57.257285   38829 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1213 18:35:57.257289   38829 command_runner.go:130] > # metrics_key = ""
	I1213 18:35:57.257299   38829 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1213 18:35:57.257318   38829 command_runner.go:130] > [crio.tracing]
	I1213 18:35:57.257325   38829 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1213 18:35:57.257329   38829 command_runner.go:130] > # enable_tracing = false
	I1213 18:35:57.257339   38829 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1213 18:35:57.257343   38829 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1213 18:35:57.257354   38829 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1213 18:35:57.257366   38829 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1213 18:35:57.257381   38829 command_runner.go:130] > # CRI-O NRI configuration.
	I1213 18:35:57.257393   38829 command_runner.go:130] > [crio.nri]
	I1213 18:35:57.257402   38829 command_runner.go:130] > # Globally enable or disable NRI.
	I1213 18:35:57.257406   38829 command_runner.go:130] > # enable_nri = true
	I1213 18:35:57.257410   38829 command_runner.go:130] > # NRI socket to listen on.
	I1213 18:35:57.257415   38829 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1213 18:35:57.257423   38829 command_runner.go:130] > # NRI plugin directory to use.
	I1213 18:35:57.257428   38829 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1213 18:35:57.257437   38829 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1213 18:35:57.257442   38829 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1213 18:35:57.257457   38829 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1213 18:35:57.257514   38829 command_runner.go:130] > # nri_disable_connections = false
	I1213 18:35:57.257530   38829 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1213 18:35:57.257535   38829 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1213 18:35:57.257544   38829 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1213 18:35:57.257549   38829 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1213 18:35:57.257558   38829 command_runner.go:130] > # NRI default validator configuration.
	I1213 18:35:57.257566   38829 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1213 18:35:57.257576   38829 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1213 18:35:57.257584   38829 command_runner.go:130] > # can be restricted/rejected:
	I1213 18:35:57.257588   38829 command_runner.go:130] > # - OCI hook injection
	I1213 18:35:57.257597   38829 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1213 18:35:57.257609   38829 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1213 18:35:57.257615   38829 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1213 18:35:57.257624   38829 command_runner.go:130] > # - adjustment of linux namespaces
	I1213 18:35:57.257632   38829 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1213 18:35:57.257642   38829 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1213 18:35:57.257652   38829 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1213 18:35:57.257660   38829 command_runner.go:130] > #
	I1213 18:35:57.257664   38829 command_runner.go:130] > # [crio.nri.default_validator]
	I1213 18:35:57.257672   38829 command_runner.go:130] > # nri_enable_default_validator = false
	I1213 18:35:57.257686   38829 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1213 18:35:57.257692   38829 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1213 18:35:57.257699   38829 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1213 18:35:57.257712   38829 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1213 18:35:57.257721   38829 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1213 18:35:57.257726   38829 command_runner.go:130] > # nri_validator_required_plugins = [
	I1213 18:35:57.257732   38829 command_runner.go:130] > # ]
	I1213 18:35:57.257738   38829 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1213 18:35:57.257747   38829 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1213 18:35:57.257763   38829 command_runner.go:130] > [crio.stats]
	I1213 18:35:57.257772   38829 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1213 18:35:57.257778   38829 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1213 18:35:57.257782   38829 command_runner.go:130] > # stats_collection_period = 0
	I1213 18:35:57.257792   38829 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1213 18:35:57.257800   38829 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1213 18:35:57.257809   38829 command_runner.go:130] > # collection_period = 0
	I1213 18:35:57.259571   38829 command_runner.go:130] ! time="2025-12-13T18:35:57.21464252Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1213 18:35:57.259589   38829 command_runner.go:130] ! time="2025-12-13T18:35:57.214677794Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1213 18:35:57.259613   38829 command_runner.go:130] ! time="2025-12-13T18:35:57.214706635Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1213 18:35:57.259625   38829 command_runner.go:130] ! time="2025-12-13T18:35:57.21473084Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1213 18:35:57.259635   38829 command_runner.go:130] ! time="2025-12-13T18:35:57.214801782Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:35:57.259643   38829 command_runner.go:130] ! time="2025-12-13T18:35:57.215251382Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1213 18:35:57.259658   38829 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
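	The messages above show CRI-O loading its base /etc/crio/crio.conf (skipped here because the file does not exist) and then applying the drop-in files under /etc/crio/crio.conf.d in lexical order, so 10-crio.conf overrides keys set earlier by 02-crio.conf. A minimal Go sketch of that merge order follows; applyFile is a hypothetical stand-in for parsing and merging one TOML file, not CRI-O's actual code.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// applyFile stands in for parsing one TOML config file and merging it into
// the running configuration; here it only reports the merge order.
func applyFile(path string) {
	fmt.Printf("Updating config from drop-in file: %s\n", path)
}

func main() {
	base := "/etc/crio/crio.conf"
	dropInDir := "/etc/crio/crio.conf.d"

	// The base file is optional; CRI-O logs and skips it when absent.
	if _, err := os.Stat(base); err == nil {
		applyFile(base)
	} else {
		fmt.Printf("Skipping not-existing config file %q\n", base)
	}

	// os.ReadDir returns entries sorted by file name, so drop-ins are
	// applied in lexical order and later files override earlier ones.
	entries, err := os.ReadDir(dropInDir)
	if err != nil {
		return
	}
	for _, e := range entries {
		if !e.IsDir() {
			applyFile(filepath.Join(dropInDir, e.Name()))
		}
	}
}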
	I1213 18:35:57.259749   38829 cni.go:84] Creating CNI manager for ""
	I1213 18:35:57.259765   38829 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 18:35:57.259800   38829 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 18:35:57.259831   38829 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-752103 NodeName:functional-752103 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 18:35:57.259972   38829 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-752103"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
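	The kubeadm config above is generated from the kubeadm options logged just before it and copied to the node as /var/tmp/minikube/kubeadm.yaml.new. A rough Go sketch of rendering such a document with text/template follows; the parameter struct and template fragment are illustrative, not minikube's real template.

package main

import (
	"os"
	"text/template"
)

// kubeadmParams is a trimmed-down stand-in for the values minikube
// substitutes into its kubeadm template; field names are illustrative.
type kubeadmParams struct {
	AdvertiseAddress  string
	BindPort          int
	NodeName          string
	KubernetesVersion string
	PodSubnet         string
	ServiceSubnet     string
}

const kubeadmTemplate = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
controlPlaneEndpoint: control-plane.minikube.internal:{{.BindPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: cluster.local
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	p := kubeadmParams{
		AdvertiseAddress:  "192.168.49.2",
		BindPort:          8441,
		NodeName:          "functional-752103",
		KubernetesVersion: "v1.35.0-beta.0",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
	}
	tmpl := template.Must(template.New("kubeadm").Parse(kubeadmTemplate))
	// Render to stdout; the real run instead scp's the rendered file to
	// /var/tmp/minikube/kubeadm.yaml.new and later diffs it against the
	// existing kubeadm.yaml to decide whether reconfiguration is needed.
	if err := tmpl.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}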
	
	I1213 18:35:57.260053   38829 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 18:35:57.267743   38829 command_runner.go:130] > kubeadm
	I1213 18:35:57.267764   38829 command_runner.go:130] > kubectl
	I1213 18:35:57.267769   38829 command_runner.go:130] > kubelet
	I1213 18:35:57.268114   38829 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 18:35:57.268211   38829 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 18:35:57.275739   38829 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1213 18:35:57.288967   38829 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 18:35:57.301790   38829 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1213 18:35:57.314673   38829 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 18:35:57.318486   38829 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1213 18:35:57.318580   38829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 18:35:57.437137   38829 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 18:35:57.456752   38829 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103 for IP: 192.168.49.2
	I1213 18:35:57.456776   38829 certs.go:195] generating shared ca certs ...
	I1213 18:35:57.456809   38829 certs.go:227] acquiring lock for ca certs: {Name:mkf9b87b9a1a82bdfcae8a54365751154f6d858f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 18:35:57.456950   38829 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key
	I1213 18:35:57.457003   38829 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key
	I1213 18:35:57.457091   38829 certs.go:257] generating profile certs ...
	I1213 18:35:57.457200   38829 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/client.key
	I1213 18:35:57.457253   38829 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/apiserver.key.597c6026
	I1213 18:35:57.457304   38829 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/proxy-client.key
	I1213 18:35:57.457312   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1213 18:35:57.457324   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1213 18:35:57.457340   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1213 18:35:57.457356   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1213 18:35:57.457367   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1213 18:35:57.457383   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1213 18:35:57.457395   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1213 18:35:57.457405   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1213 18:35:57.457457   38829 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637.pem (1338 bytes)
	W1213 18:35:57.457490   38829 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637_empty.pem, impossibly tiny 0 bytes
	I1213 18:35:57.457499   38829 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 18:35:57.457529   38829 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem (1082 bytes)
	I1213 18:35:57.457562   38829 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem (1123 bytes)
	I1213 18:35:57.457593   38829 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem (1675 bytes)
	I1213 18:35:57.457644   38829 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem (1708 bytes)
	I1213 18:35:57.457676   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637.pem -> /usr/share/ca-certificates/4637.pem
	I1213 18:35:57.457691   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem -> /usr/share/ca-certificates/46372.pem
	I1213 18:35:57.457705   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1213 18:35:57.458319   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 18:35:57.479443   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 18:35:57.498974   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 18:35:57.520210   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 18:35:57.540966   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 18:35:57.558774   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 18:35:57.576442   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 18:35:57.593767   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 18:35:57.611061   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637.pem --> /usr/share/ca-certificates/4637.pem (1338 bytes)
	I1213 18:35:57.628952   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem --> /usr/share/ca-certificates/46372.pem (1708 bytes)
	I1213 18:35:57.646627   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 18:35:57.664290   38829 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 18:35:57.677693   38829 ssh_runner.go:195] Run: openssl version
	I1213 18:35:57.683465   38829 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1213 18:35:57.683918   38829 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4637.pem
	I1213 18:35:57.691710   38829 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4637.pem /etc/ssl/certs/4637.pem
	I1213 18:35:57.699237   38829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4637.pem
	I1213 18:35:57.702943   38829 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 13 18:27 /usr/share/ca-certificates/4637.pem
	I1213 18:35:57.702972   38829 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 18:27 /usr/share/ca-certificates/4637.pem
	I1213 18:35:57.703038   38829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4637.pem
	I1213 18:35:57.743436   38829 command_runner.go:130] > 51391683
	I1213 18:35:57.743914   38829 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 18:35:57.751320   38829 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/46372.pem
	I1213 18:35:57.758498   38829 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/46372.pem /etc/ssl/certs/46372.pem
	I1213 18:35:57.765907   38829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/46372.pem
	I1213 18:35:57.769321   38829 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 13 18:27 /usr/share/ca-certificates/46372.pem
	I1213 18:35:57.769343   38829 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 18:27 /usr/share/ca-certificates/46372.pem
	I1213 18:35:57.769391   38829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/46372.pem
	I1213 18:35:57.809666   38829 command_runner.go:130] > 3ec20f2e
	I1213 18:35:57.810146   38829 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 18:35:57.818335   38829 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 18:35:57.826660   38829 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 18:35:57.834746   38829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 18:35:57.838666   38829 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 13 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I1213 18:35:57.838764   38829 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I1213 18:35:57.838851   38829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 18:35:57.879619   38829 command_runner.go:130] > b5213941
	I1213 18:35:57.880088   38829 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
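	Each CA above is made trusted on the node by hashing it with "openssl x509 -hash -noout" and symlinking /etc/ssl/certs/<hash>.0 to the certificate. A minimal Go sketch of that sequence follows; paths are illustrative, and the real run executes these steps over SSH with sudo rather than locally.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert mirrors the sequence in the log: hash the certificate with
// openssl, then expose it to the system trust store as /etc/ssl/certs/<hash>.0.
func installCACert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "51391683" for 4637.pem above
	link := filepath.Join("/etc/ssl/certs", hash+".0")

	// Equivalent of "ln -fs": remove any stale link from a previous run first.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}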
	I1213 18:35:57.887654   38829 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 18:35:57.891412   38829 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 18:35:57.891437   38829 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1213 18:35:57.891445   38829 command_runner.go:130] > Device: 259,1	Inode: 1056084     Links: 1
	I1213 18:35:57.891452   38829 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 18:35:57.891459   38829 command_runner.go:130] > Access: 2025-12-13 18:31:50.964784337 +0000
	I1213 18:35:57.891465   38829 command_runner.go:130] > Modify: 2025-12-13 18:27:46.490235937 +0000
	I1213 18:35:57.891470   38829 command_runner.go:130] > Change: 2025-12-13 18:27:46.490235937 +0000
	I1213 18:35:57.891475   38829 command_runner.go:130] >  Birth: 2025-12-13 18:27:46.490235937 +0000
	I1213 18:35:57.891539   38829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 18:35:57.937033   38829 command_runner.go:130] > Certificate will not expire
	I1213 18:35:57.937482   38829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 18:35:57.978137   38829 command_runner.go:130] > Certificate will not expire
	I1213 18:35:57.978564   38829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 18:35:58.033951   38829 command_runner.go:130] > Certificate will not expire
	I1213 18:35:58.034441   38829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 18:35:58.075936   38829 command_runner.go:130] > Certificate will not expire
	I1213 18:35:58.076412   38829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 18:35:58.118212   38829 command_runner.go:130] > Certificate will not expire
	I1213 18:35:58.118338   38829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 18:35:58.159347   38829 command_runner.go:130] > Certificate will not expire
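	The "-checkend 86400" calls above ask openssl whether each control-plane certificate expires within the next 24 hours. An equivalent check can be done directly with crypto/x509; the sketch below assumes a single-certificate PEM file and borrows one path from this run.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file expires
// within the given window, matching the semantics of "openssl x509 -checkend N".
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// 86400 seconds = 24h, the same window used in the log above.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}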
	I1213 18:35:58.159444   38829 kubeadm.go:401] StartCluster: {Name:functional-752103 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-752103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 18:35:58.159559   38829 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 18:35:58.159642   38829 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 18:35:58.186428   38829 cri.go:89] found id: ""
	I1213 18:35:58.186502   38829 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 18:35:58.193645   38829 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1213 18:35:58.193670   38829 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1213 18:35:58.193678   38829 command_runner.go:130] > /var/lib/minikube/etcd:
	I1213 18:35:58.194604   38829 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 18:35:58.194674   38829 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 18:35:58.194749   38829 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 18:35:58.202237   38829 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 18:35:58.202735   38829 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-752103" does not appear in /home/jenkins/minikube-integration/22122-2686/kubeconfig
	I1213 18:35:58.202850   38829 kubeconfig.go:62] /home/jenkins/minikube-integration/22122-2686/kubeconfig needs updating (will repair): [kubeconfig missing "functional-752103" cluster setting kubeconfig missing "functional-752103" context setting]
	I1213 18:35:58.203123   38829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/kubeconfig: {Name:mkd364151ba0e08b56cf2ae826abbcc274faaabf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 18:35:58.203546   38829 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22122-2686/kubeconfig
	I1213 18:35:58.203705   38829 kapi.go:59] client config for functional-752103: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/client.crt", KeyFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/client.key", CAFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 18:35:58.204223   38829 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1213 18:35:58.204247   38829 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1213 18:35:58.204258   38829 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1213 18:35:58.204263   38829 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1213 18:35:58.204267   38829 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1213 18:35:58.204300   38829 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1213 18:35:58.204536   38829 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 18:35:58.212005   38829 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1213 18:35:58.212037   38829 kubeadm.go:602] duration metric: took 17.346627ms to restartPrimaryControlPlane
	I1213 18:35:58.212045   38829 kubeadm.go:403] duration metric: took 52.608163ms to StartCluster
	I1213 18:35:58.212060   38829 settings.go:142] acquiring lock: {Name:mkabef07beee93a0619ef6b8f854900ab9ed0899 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 18:35:58.212116   38829 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22122-2686/kubeconfig
	I1213 18:35:58.212712   38829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/kubeconfig: {Name:mkd364151ba0e08b56cf2ae826abbcc274faaabf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 18:35:58.212903   38829 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 18:35:58.213488   38829 config.go:182] Loaded profile config "functional-752103": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 18:35:58.213543   38829 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 18:35:58.213607   38829 addons.go:70] Setting storage-provisioner=true in profile "functional-752103"
	I1213 18:35:58.213620   38829 addons.go:239] Setting addon storage-provisioner=true in "functional-752103"
	I1213 18:35:58.213643   38829 host.go:66] Checking if "functional-752103" exists ...
	I1213 18:35:58.214229   38829 cli_runner.go:164] Run: docker container inspect functional-752103 --format={{.State.Status}}
	I1213 18:35:58.214390   38829 addons.go:70] Setting default-storageclass=true in profile "functional-752103"
	I1213 18:35:58.214412   38829 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-752103"
	I1213 18:35:58.214713   38829 cli_runner.go:164] Run: docker container inspect functional-752103 --format={{.State.Status}}
	I1213 18:35:58.219256   38829 out.go:179] * Verifying Kubernetes components...
	I1213 18:35:58.222143   38829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 18:35:58.244199   38829 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 18:35:58.247016   38829 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:35:58.247042   38829 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 18:35:58.247112   38829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:35:58.257520   38829 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22122-2686/kubeconfig
	I1213 18:35:58.257687   38829 kapi.go:59] client config for functional-752103: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/client.crt", KeyFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/client.key", CAFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 18:35:58.257971   38829 addons.go:239] Setting addon default-storageclass=true in "functional-752103"
	I1213 18:35:58.258004   38829 host.go:66] Checking if "functional-752103" exists ...
	I1213 18:35:58.258425   38829 cli_runner.go:164] Run: docker container inspect functional-752103 --format={{.State.Status}}
	I1213 18:35:58.277237   38829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa Username:docker}
	I1213 18:35:58.306835   38829 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 18:35:58.306855   38829 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 18:35:58.306918   38829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:35:58.340724   38829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa Username:docker}
	I1213 18:35:58.416694   38829 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 18:35:58.451165   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:35:58.493354   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:35:59.080268   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:35:59.080307   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:35:59.080337   38829 retry.go:31] will retry after 153.209012ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:35:59.080385   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:35:59.080398   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:35:59.080404   38829 retry.go:31] will retry after 291.62792ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
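	While the apiserver on localhost:8441 is still coming back up, every kubectl apply above fails with connection refused and minikube retries with randomized, growing delays (153ms, 291ms, 511ms, ...). A minimal sketch of that retry-with-jittered-backoff pattern follows; it is illustrative, not minikube's actual retry.go implementation.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff re-runs apply until it succeeds or attempts are exhausted,
// sleeping a randomized, growing interval between tries, in the spirit of the
// "will retry after ..." lines above.
func retryWithBackoff(attempts int, base time.Duration, apply func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = apply(); err == nil {
			return nil
		}
		// Jittered exponential backoff: base * 2^i, scaled by a random factor.
		delay := time.Duration(float64(base) * float64(int(1)<<i) * (0.5 + rand.Float64()))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	err := retryWithBackoff(5, 150*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			// Stand-in for "connection refused" while the apiserver restarts.
			return errors.New("dial tcp [::1]:8441: connect: connection refused")
		}
		return nil
	})
	fmt.Println("final result:", err)
}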
	I1213 18:35:59.080464   38829 node_ready.go:35] waiting up to 6m0s for node "functional-752103" to be "Ready" ...
	I1213 18:35:59.080578   38829 type.go:168] "Request Body" body=""
	I1213 18:35:59.080656   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:35:59.080963   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
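	The GET requests against /api/v1/nodes/functional-752103 above poll the node roughly every 500ms until its Ready condition turns True, for up to the 6m0s allowed. Below is a hedged client-go sketch of the same wait loop; it assumes k8s.io/client-go is available, and while the node name and kubeconfig path come from this run, the loop itself is illustrative rather than minikube's code.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForNodeReady polls the node object until its Ready condition is True,
// roughly what the round_trippers GET loop above is doing under the hood.
func waitForNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("node %q not Ready within %v", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22122-2686/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForNodeReady(cs, "functional-752103", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}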
	I1213 18:35:59.234362   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:35:59.300149   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:35:59.300200   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:35:59.300219   38829 retry.go:31] will retry after 511.331502ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:35:59.372301   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:35:59.426538   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:35:59.430102   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:35:59.430132   38829 retry.go:31] will retry after 426.700032ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:35:59.581486   38829 type.go:168] "Request Body" body=""
	I1213 18:35:59.581586   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:35:59.581963   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:35:59.812414   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:35:59.857973   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:35:59.893611   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:35:59.893688   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:35:59.893723   38829 retry.go:31] will retry after 310.068383ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:35:59.947559   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:35:59.947617   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:35:59.947640   38829 retry.go:31] will retry after 829.65637ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:00.080795   38829 type.go:168] "Request Body" body=""
	I1213 18:36:00.080875   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:00.081240   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:00.205923   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:36:00.416702   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:00.416818   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:00.416873   38829 retry.go:31] will retry after 579.133816ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:00.581369   38829 type.go:168] "Request Body" body=""
	I1213 18:36:00.581557   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:00.582010   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:00.778452   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:36:00.837536   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:00.837585   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:00.837604   38829 retry.go:31] will retry after 974.075863ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:00.996954   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:36:01.059672   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:01.059714   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:01.059763   38829 retry.go:31] will retry after 1.136000803s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:01.080856   38829 type.go:168] "Request Body" body=""
	I1213 18:36:01.080924   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:01.081261   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:01.081306   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:01.580749   38829 type.go:168] "Request Body" body=""
	I1213 18:36:01.580822   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:01.581172   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:01.812632   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:36:01.883701   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:01.883803   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:01.883825   38829 retry.go:31] will retry after 921.808005ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:02.081109   38829 type.go:168] "Request Body" body=""
	I1213 18:36:02.081198   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:02.081477   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:02.196877   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:36:02.253907   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:02.257605   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:02.257637   38829 retry.go:31] will retry after 1.546462752s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:02.581141   38829 type.go:168] "Request Body" body=""
	I1213 18:36:02.581286   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:02.581677   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:02.805901   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:36:02.889297   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:02.893182   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:02.893216   38829 retry.go:31] will retry after 1.247577285s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:03.081687   38829 type.go:168] "Request Body" body=""
	I1213 18:36:03.081764   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:03.082108   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:03.082162   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:03.580643   38829 type.go:168] "Request Body" body=""
	I1213 18:36:03.580714   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:03.580995   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:03.804445   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:36:03.865304   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:03.865353   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:03.865372   38829 retry.go:31] will retry after 3.450909707s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:04.080758   38829 type.go:168] "Request Body" body=""
	I1213 18:36:04.080837   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:04.081202   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:04.141517   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:36:04.204625   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:04.204670   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:04.204689   38829 retry.go:31] will retry after 3.409599879s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:04.581166   38829 type.go:168] "Request Body" body=""
	I1213 18:36:04.581250   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:04.581566   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:05.081373   38829 type.go:168] "Request Body" body=""
	I1213 18:36:05.081443   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:05.081739   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:05.581581   38829 type.go:168] "Request Body" body=""
	I1213 18:36:05.581657   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:05.581992   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:05.582049   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:06.080707   38829 type.go:168] "Request Body" body=""
	I1213 18:36:06.080786   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:06.081099   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:06.580765   38829 type.go:168] "Request Body" body=""
	I1213 18:36:06.580849   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:06.581220   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:07.080726   38829 type.go:168] "Request Body" body=""
	I1213 18:36:07.080806   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:07.081195   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:07.316533   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:36:07.393411   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:07.397246   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:07.397278   38829 retry.go:31] will retry after 2.442447522s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:07.581582   38829 type.go:168] "Request Body" body=""
	I1213 18:36:07.581660   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:07.582007   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:07.615412   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:36:07.670357   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:07.674453   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:07.674491   38829 retry.go:31] will retry after 4.254133001s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:08.080696   38829 type.go:168] "Request Body" body=""
	I1213 18:36:08.080805   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:08.081173   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:08.081221   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:08.581149   38829 type.go:168] "Request Body" body=""
	I1213 18:36:08.581249   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:08.581593   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:09.081583   38829 type.go:168] "Request Body" body=""
	I1213 18:36:09.081656   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:09.081980   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:09.581654   38829 type.go:168] "Request Body" body=""
	I1213 18:36:09.581729   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:09.582054   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:09.840484   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:36:09.900307   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:09.900343   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:09.900361   38829 retry.go:31] will retry after 4.640117862s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:10.081715   38829 type.go:168] "Request Body" body=""
	I1213 18:36:10.081794   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:10.082116   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:10.082183   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:10.580872   38829 type.go:168] "Request Body" body=""
	I1213 18:36:10.580959   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:10.581373   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:11.080692   38829 type.go:168] "Request Body" body=""
	I1213 18:36:11.080776   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:11.081115   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:11.580824   38829 type.go:168] "Request Body" body=""
	I1213 18:36:11.580896   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:11.581249   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:11.928812   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:36:11.987432   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:11.987481   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:11.987500   38829 retry.go:31] will retry after 8.251825899s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:12.081733   38829 type.go:168] "Request Body" body=""
	I1213 18:36:12.081819   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:12.082391   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:12.082470   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:12.580663   38829 type.go:168] "Request Body" body=""
	I1213 18:36:12.580742   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:12.581100   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:13.080737   38829 type.go:168] "Request Body" body=""
	I1213 18:36:13.080809   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:13.081119   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:13.580828   38829 type.go:168] "Request Body" body=""
	I1213 18:36:13.580900   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:13.581257   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:14.080983   38829 type.go:168] "Request Body" body=""
	I1213 18:36:14.081075   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:14.081364   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:14.540746   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:36:14.581321   38829 type.go:168] "Request Body" body=""
	I1213 18:36:14.581395   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:14.581672   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:14.581722   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:14.600534   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:14.600587   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:14.600605   38829 retry.go:31] will retry after 8.957681085s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:15.080748   38829 type.go:168] "Request Body" body=""
	I1213 18:36:15.080845   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:15.081200   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:15.580789   38829 type.go:168] "Request Body" body=""
	I1213 18:36:15.580868   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:15.581235   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:16.080743   38829 type.go:168] "Request Body" body=""
	I1213 18:36:16.080819   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:16.081170   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:16.580886   38829 type.go:168] "Request Body" body=""
	I1213 18:36:16.580958   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:16.581330   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:17.081614   38829 type.go:168] "Request Body" body=""
	I1213 18:36:17.081684   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:17.081955   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:17.081995   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:17.580662   38829 type.go:168] "Request Body" body=""
	I1213 18:36:17.580732   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:17.581063   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:18.080650   38829 type.go:168] "Request Body" body=""
	I1213 18:36:18.080721   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:18.081108   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:18.580672   38829 type.go:168] "Request Body" body=""
	I1213 18:36:18.580742   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:18.581079   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:19.081047   38829 type.go:168] "Request Body" body=""
	I1213 18:36:19.081115   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:19.081424   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:19.580732   38829 type.go:168] "Request Body" body=""
	I1213 18:36:19.580810   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:19.581191   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:19.581284   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:20.080706   38829 type.go:168] "Request Body" body=""
	I1213 18:36:20.080786   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:20.081124   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:20.239601   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:36:20.301361   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:20.301401   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:20.301420   38829 retry.go:31] will retry after 6.59814029s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:20.580747   38829 type.go:168] "Request Body" body=""
	I1213 18:36:20.580821   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:20.581125   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:21.080844   38829 type.go:168] "Request Body" body=""
	I1213 18:36:21.080933   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:21.081353   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:21.580686   38829 type.go:168] "Request Body" body=""
	I1213 18:36:21.580762   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:21.581080   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:22.080810   38829 type.go:168] "Request Body" body=""
	I1213 18:36:22.080884   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:22.081217   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:22.081274   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:22.580705   38829 type.go:168] "Request Body" body=""
	I1213 18:36:22.580799   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:22.581136   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:23.080675   38829 type.go:168] "Request Body" body=""
	I1213 18:36:23.080747   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:23.081137   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:23.558605   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:36:23.581258   38829 type.go:168] "Request Body" body=""
	I1213 18:36:23.581331   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:23.581605   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:23.617607   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:23.617653   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:23.617671   38829 retry.go:31] will retry after 14.669686806s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:24.081419   38829 type.go:168] "Request Body" body=""
	I1213 18:36:24.081508   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:24.081878   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:24.081930   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:24.580661   38829 type.go:168] "Request Body" body=""
	I1213 18:36:24.580735   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:24.581024   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:25.080794   38829 type.go:168] "Request Body" body=""
	I1213 18:36:25.080880   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:25.081347   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:25.580742   38829 type.go:168] "Request Body" body=""
	I1213 18:36:25.580816   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:25.581207   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:26.080781   38829 type.go:168] "Request Body" body=""
	I1213 18:36:26.080854   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:26.081166   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:26.580764   38829 type.go:168] "Request Body" body=""
	I1213 18:36:26.580862   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:26.581247   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:26.581300   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:26.900727   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:36:26.960607   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:26.960668   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:26.960687   38829 retry.go:31] will retry after 15.397640826s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
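
(Editor's note: the `retry.go:31` lines above show the addon applier re-running `kubectl apply` with a growing, jittered delay until the apiserver accepts connections again. A minimal, hypothetical sketch of that pattern follows; the `applyWithRetry` helper and its delays are illustrative only and are not minikube's actual retry.go implementation.)

// Hypothetical sketch (not minikube's retry.go): re-run a kubectl apply
// with a jittered, growing backoff until the apiserver answers again.
package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

func applyWithRetry(manifest string, attempts int) error {
	backoff := 10 * time.Second
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
		if err == nil {
			return nil
		}
		// Wait a jittered interval, then grow the base delay for the next try.
		wait := backoff/2 + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("apply failed, will retry after %s: %v\n%s", wait, err, out)
		time.Sleep(wait)
		backoff *= 2
	}
	return fmt.Errorf("apply of %s did not succeed after %d attempts", manifest, attempts)
}

func main() {
	if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 5); err != nil {
		fmt.Println(err)
	}
}
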
	I1213 18:36:27.080883   38829 type.go:168] "Request Body" body=""
	I1213 18:36:27.080957   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:27.081297   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:27.580637   38829 type.go:168] "Request Body" body=""
	I1213 18:36:27.580703   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:27.580956   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:28.080641   38829 type.go:168] "Request Body" body=""
	I1213 18:36:28.080752   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:28.081081   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:28.580963   38829 type.go:168] "Request Body" body=""
	I1213 18:36:28.581049   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:28.581366   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:28.581418   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:29.081265   38829 type.go:168] "Request Body" body=""
	I1213 18:36:29.081330   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:29.081585   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:29.581341   38829 type.go:168] "Request Body" body=""
	I1213 18:36:29.581414   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:29.581724   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:30.083283   38829 type.go:168] "Request Body" body=""
	I1213 18:36:30.083370   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:30.083708   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:30.581559   38829 type.go:168] "Request Body" body=""
	I1213 18:36:30.581633   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:30.581902   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:30.581946   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:31.081665   38829 type.go:168] "Request Body" body=""
	I1213 18:36:31.081736   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:31.082102   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:31.580734   38829 type.go:168] "Request Body" body=""
	I1213 18:36:31.580815   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:31.581165   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:32.080588   38829 type.go:168] "Request Body" body=""
	I1213 18:36:32.080654   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:32.080909   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:32.581657   38829 type.go:168] "Request Body" body=""
	I1213 18:36:32.581734   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:32.582056   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:32.582116   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:33.080787   38829 type.go:168] "Request Body" body=""
	I1213 18:36:33.080867   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:33.081206   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:33.580678   38829 type.go:168] "Request Body" body=""
	I1213 18:36:33.580745   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:33.580998   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:34.080961   38829 type.go:168] "Request Body" body=""
	I1213 18:36:34.081065   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:34.081433   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:34.581228   38829 type.go:168] "Request Body" body=""
	I1213 18:36:34.581300   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:34.581636   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:35.081408   38829 type.go:168] "Request Body" body=""
	I1213 18:36:35.081478   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:35.081747   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:35.081790   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:35.581492   38829 type.go:168] "Request Body" body=""
	I1213 18:36:35.581568   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:35.581859   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:36.081553   38829 type.go:168] "Request Body" body=""
	I1213 18:36:36.081623   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:36.081928   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:36.581632   38829 type.go:168] "Request Body" body=""
	I1213 18:36:36.581711   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:36.582018   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:37.080726   38829 type.go:168] "Request Body" body=""
	I1213 18:36:37.080804   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:37.081189   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:37.580917   38829 type.go:168] "Request Body" body=""
	I1213 18:36:37.580993   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:37.581352   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:37.581446   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:38.080688   38829 type.go:168] "Request Body" body=""
	I1213 18:36:38.080770   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:38.081101   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:38.287495   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:36:38.357240   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:38.360822   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:38.360853   38829 retry.go:31] will retry after 30.28485436s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:38.581302   38829 type.go:168] "Request Body" body=""
	I1213 18:36:38.581374   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:38.581695   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:39.081218   38829 type.go:168] "Request Body" body=""
	I1213 18:36:39.081295   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:39.081664   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:39.581465   38829 type.go:168] "Request Body" body=""
	I1213 18:36:39.581533   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:39.581794   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:39.581852   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
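
(Editor's note: the repeated `node_ready.go:55` warnings and `round_trippers` pairs above are a readiness poll of the same URL roughly every 500ms, each ending in "connection refused" because the apiserver on 192.168.49.2:8441 is down. A standalone, hypothetical probe of that endpoint, using only the Go standard library, would look like the sketch below; it is not minikube's node_ready.go and skips TLS verification purely for illustration.)

// Hypothetical poller: hit the endpoint shown in the log and report
// whether the apiserver has started answering yet.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	url := "https://192.168.49.2:8441/api/v1/nodes/functional-752103"
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The test cluster uses its own CA; skip verification for this probe only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for i := 0; i < 10; i++ {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Printf("attempt %d: %v (apiserver still down)\n", i+1, err)
		} else {
			fmt.Printf("attempt %d: HTTP %d\n", i+1, resp.StatusCode)
			resp.Body.Close()
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}
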
	I1213 18:36:40.081640   38829 type.go:168] "Request Body" body=""
	I1213 18:36:40.081724   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:40.082071   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:40.580714   38829 type.go:168] "Request Body" body=""
	I1213 18:36:40.580788   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:40.581147   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:41.080724   38829 type.go:168] "Request Body" body=""
	I1213 18:36:41.080801   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:41.081086   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:41.580719   38829 type.go:168] "Request Body" body=""
	I1213 18:36:41.580809   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:41.581140   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:42.080831   38829 type.go:168] "Request Body" body=""
	I1213 18:36:42.080909   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:42.081302   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:42.081363   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:42.358603   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:36:42.430743   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:42.430803   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:42.430822   38829 retry.go:31] will retry after 12.093455046s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:42.581106   38829 type.go:168] "Request Body" body=""
	I1213 18:36:42.581178   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:42.581444   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:43.081272   38829 type.go:168] "Request Body" body=""
	I1213 18:36:43.081354   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:43.081648   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:43.580658   38829 type.go:168] "Request Body" body=""
	I1213 18:36:43.580735   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:43.581055   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:44.080685   38829 type.go:168] "Request Body" body=""
	I1213 18:36:44.080795   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:44.081152   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:44.580685   38829 type.go:168] "Request Body" body=""
	I1213 18:36:44.580759   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:44.581102   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:44.581161   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:45.080810   38829 type.go:168] "Request Body" body=""
	I1213 18:36:45.080894   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:45.081226   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:45.581071   38829 type.go:168] "Request Body" body=""
	I1213 18:36:45.581137   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:45.581415   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:46.081136   38829 type.go:168] "Request Body" body=""
	I1213 18:36:46.081217   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:46.081567   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:46.581397   38829 type.go:168] "Request Body" body=""
	I1213 18:36:46.581468   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:46.581797   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:46.581852   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:47.081586   38829 type.go:168] "Request Body" body=""
	I1213 18:36:47.081660   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:47.081917   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:47.580671   38829 type.go:168] "Request Body" body=""
	I1213 18:36:47.580752   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:47.581109   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:48.080824   38829 type.go:168] "Request Body" body=""
	I1213 18:36:48.080903   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:48.081209   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:48.581175   38829 type.go:168] "Request Body" body=""
	I1213 18:36:48.581241   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:48.581504   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:49.081596   38829 type.go:168] "Request Body" body=""
	I1213 18:36:49.081669   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:49.082029   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:49.082084   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:49.580622   38829 type.go:168] "Request Body" body=""
	I1213 18:36:49.580704   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:49.581055   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:50.080743   38829 type.go:168] "Request Body" body=""
	I1213 18:36:50.080823   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:50.081147   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:50.580734   38829 type.go:168] "Request Body" body=""
	I1213 18:36:50.580806   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:50.581174   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:51.080882   38829 type.go:168] "Request Body" body=""
	I1213 18:36:51.080963   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:51.081341   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:51.580687   38829 type.go:168] "Request Body" body=""
	I1213 18:36:51.580761   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:51.581057   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:51.581110   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:52.080731   38829 type.go:168] "Request Body" body=""
	I1213 18:36:52.080817   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:52.081192   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:52.580893   38829 type.go:168] "Request Body" body=""
	I1213 18:36:52.580986   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:52.581347   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:53.080709   38829 type.go:168] "Request Body" body=""
	I1213 18:36:53.080779   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:53.081063   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:53.580755   38829 type.go:168] "Request Body" body=""
	I1213 18:36:53.580829   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:53.581182   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:53.581240   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:54.081104   38829 type.go:168] "Request Body" body=""
	I1213 18:36:54.081173   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:54.081470   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:54.525326   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:36:54.580832   38829 type.go:168] "Request Body" body=""
	I1213 18:36:54.580898   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:54.581173   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:54.600652   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:54.600694   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:54.600713   38829 retry.go:31] will retry after 41.212755678s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:55.081498   38829 type.go:168] "Request Body" body=""
	I1213 18:36:55.081571   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:55.081915   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:55.580632   38829 type.go:168] "Request Body" body=""
	I1213 18:36:55.580703   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:55.581066   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:56.080716   38829 type.go:168] "Request Body" body=""
	I1213 18:36:56.080780   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:56.081078   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:56.081124   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:56.580765   38829 type.go:168] "Request Body" body=""
	I1213 18:36:56.580847   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:56.581215   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:57.080817   38829 type.go:168] "Request Body" body=""
	I1213 18:36:57.080904   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:57.081246   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:57.580702   38829 type.go:168] "Request Body" body=""
	I1213 18:36:57.580781   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:57.581095   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:58.080724   38829 type.go:168] "Request Body" body=""
	I1213 18:36:58.080815   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:58.081171   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:58.081230   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:58.580804   38829 type.go:168] "Request Body" body=""
	I1213 18:36:58.580886   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:58.581230   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:59.080817   38829 type.go:168] "Request Body" body=""
	I1213 18:36:59.080891   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:59.081167   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:59.580749   38829 type.go:168] "Request Body" body=""
	I1213 18:36:59.580848   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:59.581262   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:00.080983   38829 type.go:168] "Request Body" body=""
	I1213 18:37:00.081091   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:00.081411   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:00.081460   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:00.580690   38829 type.go:168] "Request Body" body=""
	I1213 18:37:00.580766   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:00.581072   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:01.080673   38829 type.go:168] "Request Body" body=""
	I1213 18:37:01.080760   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:01.081112   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:01.580720   38829 type.go:168] "Request Body" body=""
	I1213 18:37:01.580794   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:01.581158   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:02.080753   38829 type.go:168] "Request Body" body=""
	I1213 18:37:02.080821   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:02.081110   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:02.580732   38829 type.go:168] "Request Body" body=""
	I1213 18:37:02.580806   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:02.581155   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:02.581205   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:03.080748   38829 type.go:168] "Request Body" body=""
	I1213 18:37:03.080823   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:03.081153   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:03.580615   38829 type.go:168] "Request Body" body=""
	I1213 18:37:03.580691   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:03.580974   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:04.080845   38829 type.go:168] "Request Body" body=""
	I1213 18:37:04.080916   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:04.081330   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:04.580902   38829 type.go:168] "Request Body" body=""
	I1213 18:37:04.581002   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:04.581380   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:04.581437   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:05.080788   38829 type.go:168] "Request Body" body=""
	I1213 18:37:05.080867   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:05.081182   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:05.580731   38829 type.go:168] "Request Body" body=""
	I1213 18:37:05.580826   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:05.581178   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:06.080721   38829 type.go:168] "Request Body" body=""
	I1213 18:37:06.080796   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:06.081180   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:06.580658   38829 type.go:168] "Request Body" body=""
	I1213 18:37:06.580727   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:06.581063   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:07.080796   38829 type.go:168] "Request Body" body=""
	I1213 18:37:07.080883   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:07.081219   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:07.081280   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:07.580756   38829 type.go:168] "Request Body" body=""
	I1213 18:37:07.580835   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:07.581166   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:08.080678   38829 type.go:168] "Request Body" body=""
	I1213 18:37:08.080757   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:08.081073   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:08.580840   38829 type.go:168] "Request Body" body=""
	I1213 18:37:08.580922   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:08.581286   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:08.646539   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:37:08.707161   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:37:08.707197   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:37:08.707216   38829 retry.go:31] will retry after 43.904706278s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:37:09.080730   38829 type.go:168] "Request Body" body=""
	I1213 18:37:09.080812   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:09.081148   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:09.580688   38829 type.go:168] "Request Body" body=""
	I1213 18:37:09.580756   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:09.581080   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:09.581129   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:10.080738   38829 type.go:168] "Request Body" body=""
	I1213 18:37:10.080818   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:10.081184   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:10.580752   38829 type.go:168] "Request Body" body=""
	I1213 18:37:10.580829   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:10.581212   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:11.080819   38829 type.go:168] "Request Body" body=""
	I1213 18:37:11.080905   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:11.081275   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:11.580750   38829 type.go:168] "Request Body" body=""
	I1213 18:37:11.580826   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:11.581167   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:11.581218   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:12.080976   38829 type.go:168] "Request Body" body=""
	I1213 18:37:12.081075   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:12.081413   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:12.581163   38829 type.go:168] "Request Body" body=""
	I1213 18:37:12.581239   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:12.581504   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:13.081350   38829 type.go:168] "Request Body" body=""
	I1213 18:37:13.081422   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:13.081759   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:13.581540   38829 type.go:168] "Request Body" body=""
	I1213 18:37:13.581621   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:13.581958   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:13.582012   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:14.080637   38829 type.go:168] "Request Body" body=""
	I1213 18:37:14.080749   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:14.081037   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:14.580751   38829 type.go:168] "Request Body" body=""
	I1213 18:37:14.580822   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:14.581126   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:15.080809   38829 type.go:168] "Request Body" body=""
	I1213 18:37:15.080894   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:15.081289   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:15.580701   38829 type.go:168] "Request Body" body=""
	I1213 18:37:15.580784   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:15.581161   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:16.080844   38829 type.go:168] "Request Body" body=""
	I1213 18:37:16.080922   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:16.081237   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:16.081285   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:16.580898   38829 type.go:168] "Request Body" body=""
	I1213 18:37:16.581034   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:16.581399   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:17.080661   38829 type.go:168] "Request Body" body=""
	I1213 18:37:17.080737   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:17.080990   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:17.580692   38829 type.go:168] "Request Body" body=""
	I1213 18:37:17.580803   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:17.581102   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:18.080750   38829 type.go:168] "Request Body" body=""
	I1213 18:37:18.080868   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:18.081221   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:18.581194   38829 type.go:168] "Request Body" body=""
	I1213 18:37:18.581282   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:18.581589   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:18.581661   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:19.080720   38829 type.go:168] "Request Body" body=""
	I1213 18:37:19.080794   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:19.081153   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:19.580707   38829 type.go:168] "Request Body" body=""
	I1213 18:37:19.580807   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:19.581139   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:20.080683   38829 type.go:168] "Request Body" body=""
	I1213 18:37:20.080783   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:20.081124   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:20.580699   38829 type.go:168] "Request Body" body=""
	I1213 18:37:20.580768   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:20.581140   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:21.080704   38829 type.go:168] "Request Body" body=""
	I1213 18:37:21.080813   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:21.081147   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:21.081200   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:21.580715   38829 type.go:168] "Request Body" body=""
	I1213 18:37:21.580794   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:21.581158   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:22.080770   38829 type.go:168] "Request Body" body=""
	I1213 18:37:22.080878   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:22.081249   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:22.580823   38829 type.go:168] "Request Body" body=""
	I1213 18:37:22.580919   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:22.581227   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:23.080672   38829 type.go:168] "Request Body" body=""
	I1213 18:37:23.080740   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:23.081069   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:23.580725   38829 type.go:168] "Request Body" body=""
	I1213 18:37:23.580816   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:23.581144   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:23.581194   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:24.081109   38829 type.go:168] "Request Body" body=""
	I1213 18:37:24.081180   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:24.081522   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:24.581618   38829 type.go:168] "Request Body" body=""
	I1213 18:37:24.581687   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:24.582010   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:25.080756   38829 type.go:168] "Request Body" body=""
	I1213 18:37:25.080839   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:25.081197   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:25.580943   38829 type.go:168] "Request Body" body=""
	I1213 18:37:25.581038   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:25.581354   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:25.581416   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:26.080723   38829 type.go:168] "Request Body" body=""
	I1213 18:37:26.080835   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:26.081227   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:26.580735   38829 type.go:168] "Request Body" body=""
	I1213 18:37:26.580817   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:26.581160   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:27.080700   38829 type.go:168] "Request Body" body=""
	I1213 18:37:27.080784   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:27.081126   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:27.580667   38829 type.go:168] "Request Body" body=""
	I1213 18:37:27.580751   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:27.581089   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:28.080604   38829 type.go:168] "Request Body" body=""
	I1213 18:37:28.080698   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:28.081045   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:28.081097   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:28.580817   38829 type.go:168] "Request Body" body=""
	I1213 18:37:28.580906   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:28.581222   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:29.080796   38829 type.go:168] "Request Body" body=""
	I1213 18:37:29.080873   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:29.081151   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:29.580777   38829 type.go:168] "Request Body" body=""
	I1213 18:37:29.580870   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:29.581199   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:30.080803   38829 type.go:168] "Request Body" body=""
	I1213 18:37:30.080884   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:30.081237   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:30.081287   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:30.580672   38829 type.go:168] "Request Body" body=""
	I1213 18:37:30.580745   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:30.581077   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:31.081506   38829 type.go:168] "Request Body" body=""
	I1213 18:37:31.081581   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:31.081922   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:31.580645   38829 type.go:168] "Request Body" body=""
	I1213 18:37:31.580718   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:31.581102   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:32.080661   38829 type.go:168] "Request Body" body=""
	I1213 18:37:32.080783   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:32.081114   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:32.580825   38829 type.go:168] "Request Body" body=""
	I1213 18:37:32.580936   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:32.581248   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:32.581295   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:33.080746   38829 type.go:168] "Request Body" body=""
	I1213 18:37:33.080835   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:33.081225   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:33.580676   38829 type.go:168] "Request Body" body=""
	I1213 18:37:33.580750   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:33.581029   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:34.081646   38829 type.go:168] "Request Body" body=""
	I1213 18:37:34.081715   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:34.082009   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:34.580682   38829 type.go:168] "Request Body" body=""
	I1213 18:37:34.580780   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:34.581134   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:35.080825   38829 type.go:168] "Request Body" body=""
	I1213 18:37:35.080895   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:35.081246   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:35.081298   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:35.580940   38829 type.go:168] "Request Body" body=""
	I1213 18:37:35.581051   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:35.581350   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:35.813701   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:37:35.887144   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:37:35.887179   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:37:35.887279   38829 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
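
	The loop above retries a GET on https://192.168.49.2:8441/api/v1/nodes/functional-752103 roughly every 500ms because the apiserver port is refusing connections, and the storageclass addon apply fails for the same reason. The following is a minimal Go sketch of that poll-until-reachable pattern, built only on the standard library; it is an illustration, not minikube's node_ready.go, and the function name waitForAPIServer is assumed.

	// Minimal sketch (not minikube's implementation) of the poll-and-retry
	// pattern visible in the log above: the apiserver endpoint is probed
	// every 500ms until it stops refusing connections or a deadline passes.
	// The URL and interval mirror the log; the function name is illustrative.
	package main

	import (
		"crypto/tls"
		"errors"
		"fmt"
		"net/http"
		"time"
	)

	func waitForAPIServer(url string, interval, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// The probe only checks reachability, so certificate
			// verification is skipped for this self-signed endpoint.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				return nil // port answers; even a 401/403 means the apiserver is up
			}
			fmt.Printf("apiserver not ready (will retry): %v\n", err)
			time.Sleep(interval)
		}
		return errors.New("timed out waiting for apiserver")
	}

	func main() {
		err := waitForAPIServer("https://192.168.49.2:8441/healthz", 500*time.Millisecond, 30*time.Second)
		fmt.Println("result:", err)
	}
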
	I1213 18:37:36.080750   38829 type.go:168] "Request Body" body=""
	I1213 18:37:36.080833   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:36.081177   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:36.580678   38829 type.go:168] "Request Body" body=""
	I1213 18:37:36.580752   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:36.581058   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:37.080714   38829 type.go:168] "Request Body" body=""
	I1213 18:37:37.080814   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:37.081161   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:37.580851   38829 type.go:168] "Request Body" body=""
	I1213 18:37:37.580926   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:37.581239   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:37.581288   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:38.080774   38829 type.go:168] "Request Body" body=""
	I1213 18:37:38.080865   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:38.081305   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:38.581237   38829 type.go:168] "Request Body" body=""
	I1213 18:37:38.581321   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:38.581645   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:39.081533   38829 type.go:168] "Request Body" body=""
	I1213 18:37:39.081612   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:39.081897   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:39.581503   38829 type.go:168] "Request Body" body=""
	I1213 18:37:39.581567   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:39.581828   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:39.581866   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:40.081636   38829 type.go:168] "Request Body" body=""
	I1213 18:37:40.081710   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:40.082035   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:40.580686   38829 type.go:168] "Request Body" body=""
	I1213 18:37:40.580764   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:40.581082   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:41.080659   38829 type.go:168] "Request Body" body=""
	I1213 18:37:41.080744   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:41.081073   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:41.580856   38829 type.go:168] "Request Body" body=""
	I1213 18:37:41.580929   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:41.581268   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:42.080912   38829 type.go:168] "Request Body" body=""
	I1213 18:37:42.081054   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:42.081405   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:42.081473   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:42.581188   38829 type.go:168] "Request Body" body=""
	I1213 18:37:42.581268   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:42.581539   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:43.081397   38829 type.go:168] "Request Body" body=""
	I1213 18:37:43.081474   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:43.081823   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:43.581624   38829 type.go:168] "Request Body" body=""
	I1213 18:37:43.581704   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:43.582019   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:44.081168   38829 type.go:168] "Request Body" body=""
	I1213 18:37:44.081243   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:44.081539   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:44.081581   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:44.581405   38829 type.go:168] "Request Body" body=""
	I1213 18:37:44.581481   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:44.581805   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:45.081836   38829 type.go:168] "Request Body" body=""
	I1213 18:37:45.081938   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:45.082358   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:45.580699   38829 type.go:168] "Request Body" body=""
	I1213 18:37:45.580773   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:45.581090   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:46.080825   38829 type.go:168] "Request Body" body=""
	I1213 18:37:46.080898   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:46.081231   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:46.580728   38829 type.go:168] "Request Body" body=""
	I1213 18:37:46.580818   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:46.581180   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:46.581235   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:47.080684   38829 type.go:168] "Request Body" body=""
	I1213 18:37:47.080759   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:47.081124   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:47.580848   38829 type.go:168] "Request Body" body=""
	I1213 18:37:47.580921   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:47.581277   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:48.080712   38829 type.go:168] "Request Body" body=""
	I1213 18:37:48.080804   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:48.081135   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:48.580811   38829 type.go:168] "Request Body" body=""
	I1213 18:37:48.580882   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:48.581154   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:49.081058   38829 type.go:168] "Request Body" body=""
	I1213 18:37:49.081150   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:49.081477   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:49.081542   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:49.581293   38829 type.go:168] "Request Body" body=""
	I1213 18:37:49.581370   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:49.581713   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:50.081496   38829 type.go:168] "Request Body" body=""
	I1213 18:37:50.081562   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:50.081847   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:50.581629   38829 type.go:168] "Request Body" body=""
	I1213 18:37:50.581706   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:50.582071   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:51.080700   38829 type.go:168] "Request Body" body=""
	I1213 18:37:51.080790   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:51.081171   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:51.580683   38829 type.go:168] "Request Body" body=""
	I1213 18:37:51.580754   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:51.581047   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:51.581094   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:52.080714   38829 type.go:168] "Request Body" body=""
	I1213 18:37:52.080787   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:52.081175   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:52.580775   38829 type.go:168] "Request Body" body=""
	I1213 18:37:52.580867   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:52.581254   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:52.612466   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:37:52.672905   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:37:52.677070   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:37:52.677165   38829 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 18:37:52.680309   38829 out.go:179] * Enabled addons: 
	I1213 18:37:52.684021   38829 addons.go:530] duration metric: took 1m54.470472162s for enable addons: enabled=[]
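
	Both addon manifests (storageclass.yaml above and storage-provisioner.yaml here) fail with the same connection-refused error against localhost:8441, and minikube logs "apply failed, will retry" before finishing with an empty addon list. A hedged sketch of an apply-with-retry wrapper follows; applyWithRetry, the attempt count, and the delay are illustrative, while the kubeconfig and manifest paths are taken from the log.

	// Hedged sketch of an "apply with retry" wrapper resembling what the
	// addons log shows: kubectl apply is re-run until the apiserver accepts
	// the manifest or the attempts run out. Paths come from the log above;
	// the helper name, attempt count, and delay are assumptions.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"time"
	)

	func applyWithRetry(kubeconfig, manifest string, attempts int, delay time.Duration) error {
		var lastErr error
		for i := 0; i < attempts; i++ {
			cmd := exec.Command("kubectl", "apply", "--force", "-f", manifest)
			cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
			out, err := cmd.CombinedOutput()
			if err == nil {
				return nil
			}
			lastErr = fmt.Errorf("apply failed, will retry: %w\n%s", err, out)
			fmt.Println(lastErr)
			time.Sleep(delay)
		}
		return lastErr
	}

	func main() {
		err := applyWithRetry(
			"/var/lib/minikube/kubeconfig",
			"/etc/kubernetes/addons/storage-provisioner.yaml",
			3, 2*time.Second,
		)
		fmt.Println("final:", err)
	}
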
	I1213 18:37:53.081534   38829 type.go:168] "Request Body" body=""
	I1213 18:37:53.081600   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:53.081904   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:53.580635   38829 type.go:168] "Request Body" body=""
	I1213 18:37:53.580711   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:53.581033   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:54.080643   38829 type.go:168] "Request Body" body=""
	I1213 18:37:54.080739   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:54.082029   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	W1213 18:37:54.082091   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:54.581623   38829 type.go:168] "Request Body" body=""
	I1213 18:37:54.581698   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:54.581957   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:55.080687   38829 type.go:168] "Request Body" body=""
	I1213 18:37:55.080780   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:55.081111   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:55.580756   38829 type.go:168] "Request Body" body=""
	I1213 18:37:55.580828   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:55.581197   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:56.080640   38829 type.go:168] "Request Body" body=""
	I1213 18:37:56.080714   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:56.081049   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:56.580613   38829 type.go:168] "Request Body" body=""
	I1213 18:37:56.580689   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:56.581045   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:56.581101   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:57.080597   38829 type.go:168] "Request Body" body=""
	I1213 18:37:57.080691   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:57.081049   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:57.580930   38829 type.go:168] "Request Body" body=""
	I1213 18:37:57.581038   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:57.585714   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1213 18:37:58.081512   38829 type.go:168] "Request Body" body=""
	I1213 18:37:58.081591   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:58.081945   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:58.580703   38829 type.go:168] "Request Body" body=""
	I1213 18:37:58.580778   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:58.581145   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:58.581214   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:59.081515   38829 type.go:168] "Request Body" body=""
	I1213 18:37:59.081606   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:59.081931   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:59.580661   38829 type.go:168] "Request Body" body=""
	I1213 18:37:59.580732   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:59.581072   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:00.080803   38829 type.go:168] "Request Body" body=""
	I1213 18:38:00.080888   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:00.081237   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:00.581619   38829 type.go:168] "Request Body" body=""
	I1213 18:38:00.581690   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:00.582027   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:00.582084   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:01.080751   38829 type.go:168] "Request Body" body=""
	I1213 18:38:01.080838   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:01.081194   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:01.580724   38829 type.go:168] "Request Body" body=""
	I1213 18:38:01.580804   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:01.581152   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:02.080668   38829 type.go:168] "Request Body" body=""
	I1213 18:38:02.080746   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:02.081102   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:02.580776   38829 type.go:168] "Request Body" body=""
	I1213 18:38:02.580850   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:02.581187   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:03.080936   38829 type.go:168] "Request Body" body=""
	I1213 18:38:03.081031   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:03.081349   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:03.081405   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:03.580669   38829 type.go:168] "Request Body" body=""
	I1213 18:38:03.580767   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:03.581056   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:04.080818   38829 type.go:168] "Request Body" body=""
	I1213 18:38:04.080899   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:04.081235   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:04.580930   38829 type.go:168] "Request Body" body=""
	I1213 18:38:04.581025   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:04.581369   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:05.080659   38829 type.go:168] "Request Body" body=""
	I1213 18:38:05.080743   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:05.081076   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:05.580757   38829 type.go:168] "Request Body" body=""
	I1213 18:38:05.580829   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:05.581176   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:05.581227   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:06.080773   38829 type.go:168] "Request Body" body=""
	I1213 18:38:06.080851   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:06.081201   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:06.580678   38829 type.go:168] "Request Body" body=""
	I1213 18:38:06.580751   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:06.581040   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:07.080776   38829 type.go:168] "Request Body" body=""
	I1213 18:38:07.080848   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:07.081170   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:07.580731   38829 type.go:168] "Request Body" body=""
	I1213 18:38:07.580806   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:07.581160   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:08.080772   38829 type.go:168] "Request Body" body=""
	I1213 18:38:08.080849   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:08.081161   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:08.081226   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:08.580947   38829 type.go:168] "Request Body" body=""
	I1213 18:38:08.581044   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:08.581405   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:09.081557   38829 type.go:168] "Request Body" body=""
	I1213 18:38:09.081630   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:09.081955   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:09.580701   38829 type.go:168] "Request Body" body=""
	I1213 18:38:09.580777   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:09.581100   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:10.080747   38829 type.go:168] "Request Body" body=""
	I1213 18:38:10.080835   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:10.081225   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:10.081288   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:10.580771   38829 type.go:168] "Request Body" body=""
	I1213 18:38:10.580886   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:10.581218   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:11.080922   38829 type.go:168] "Request Body" body=""
	I1213 18:38:11.080992   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:11.081274   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:11.581973   38829 type.go:168] "Request Body" body=""
	I1213 18:38:11.582052   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:11.582377   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:12.081104   38829 type.go:168] "Request Body" body=""
	I1213 18:38:12.081179   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:12.081532   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:12.081585   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:12.581355   38829 type.go:168] "Request Body" body=""
	I1213 18:38:12.581430   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:12.581762   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:13.081529   38829 type.go:168] "Request Body" body=""
	I1213 18:38:13.081604   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:13.081921   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:13.580639   38829 type.go:168] "Request Body" body=""
	I1213 18:38:13.580716   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:13.581089   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:14.081616   38829 type.go:168] "Request Body" body=""
	I1213 18:38:14.081703   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:14.082037   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:14.082090   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:14.580727   38829 type.go:168] "Request Body" body=""
	I1213 18:38:14.580815   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:14.581180   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:15.080903   38829 type.go:168] "Request Body" body=""
	I1213 18:38:15.080982   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:15.081338   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:15.581041   38829 type.go:168] "Request Body" body=""
	I1213 18:38:15.581119   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:15.581474   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:16.081265   38829 type.go:168] "Request Body" body=""
	I1213 18:38:16.081338   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:16.081665   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:16.581493   38829 type.go:168] "Request Body" body=""
	I1213 18:38:16.581589   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:16.581945   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:16.581999   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:17.080642   38829 type.go:168] "Request Body" body=""
	I1213 18:38:17.080713   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:17.080986   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:17.580719   38829 type.go:168] "Request Body" body=""
	I1213 18:38:17.580796   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:17.581138   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:18.080868   38829 type.go:168] "Request Body" body=""
	I1213 18:38:18.080948   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:18.081331   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:18.581194   38829 type.go:168] "Request Body" body=""
	I1213 18:38:18.581268   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:18.581529   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:19.081522   38829 type.go:168] "Request Body" body=""
	I1213 18:38:19.081598   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:19.081945   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:19.082001   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:19.580714   38829 type.go:168] "Request Body" body=""
	I1213 18:38:19.580805   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:19.581171   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:20.080873   38829 type.go:168] "Request Body" body=""
	I1213 18:38:20.080948   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:20.081259   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:20.580728   38829 type.go:168] "Request Body" body=""
	I1213 18:38:20.580811   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:20.581178   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:21.080749   38829 type.go:168] "Request Body" body=""
	I1213 18:38:21.080849   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:21.081219   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:21.580655   38829 type.go:168] "Request Body" body=""
	I1213 18:38:21.580730   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:21.581101   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:21.581180   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:22.080740   38829 type.go:168] "Request Body" body=""
	I1213 18:38:22.080819   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:22.081200   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:22.580922   38829 type.go:168] "Request Body" body=""
	I1213 18:38:22.581020   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:22.581389   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:23.080725   38829 type.go:168] "Request Body" body=""
	I1213 18:38:23.080802   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:23.081145   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:23.580880   38829 type.go:168] "Request Body" body=""
	I1213 18:38:23.580958   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:23.581338   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:23.581392   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:24.081664   38829 type.go:168] "Request Body" body=""
	I1213 18:38:24.081759   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:24.082117   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:24.580825   38829 type.go:168] "Request Body" body=""
	I1213 18:38:24.580901   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:24.581233   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:25.080731   38829 type.go:168] "Request Body" body=""
	I1213 18:38:25.080813   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:25.081203   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:25.580730   38829 type.go:168] "Request Body" body=""
	I1213 18:38:25.580807   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:25.581142   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:26.080689   38829 type.go:168] "Request Body" body=""
	I1213 18:38:26.080779   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:26.081103   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:26.081156   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:26.580750   38829 type.go:168] "Request Body" body=""
	I1213 18:38:26.580831   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:26.581177   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:27.080736   38829 type.go:168] "Request Body" body=""
	I1213 18:38:27.080812   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:27.081191   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:27.580696   38829 type.go:168] "Request Body" body=""
	I1213 18:38:27.580770   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:27.581094   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:28.080768   38829 type.go:168] "Request Body" body=""
	I1213 18:38:28.080841   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:28.081147   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:28.081197   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:28.581180   38829 type.go:168] "Request Body" body=""
	I1213 18:38:28.581274   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:28.581646   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:29.080821   38829 type.go:168] "Request Body" body=""
	I1213 18:38:29.080892   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:29.081191   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:29.580951   38829 type.go:168] "Request Body" body=""
	I1213 18:38:29.581053   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:29.581390   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:30.080799   38829 type.go:168] "Request Body" body=""
	I1213 18:38:30.080882   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:30.081350   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:30.081432   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:30.580706   38829 type.go:168] "Request Body" body=""
	I1213 18:38:30.580834   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:30.581124   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:31.080774   38829 type.go:168] "Request Body" body=""
	I1213 18:38:31.080864   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:31.081259   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:31.580984   38829 type.go:168] "Request Body" body=""
	I1213 18:38:31.581082   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:31.581450   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:32.080667   38829 type.go:168] "Request Body" body=""
	I1213 18:38:32.080743   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:32.081034   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:32.580743   38829 type.go:168] "Request Body" body=""
	I1213 18:38:32.580816   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:32.581200   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:32.581255   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:33.080726   38829 type.go:168] "Request Body" body=""
	I1213 18:38:33.080809   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:33.081182   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:33.580725   38829 type.go:168] "Request Body" body=""
	I1213 18:38:33.580795   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:33.581164   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:34.081257   38829 type.go:168] "Request Body" body=""
	I1213 18:38:34.081337   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:34.081668   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:34.581504   38829 type.go:168] "Request Body" body=""
	I1213 18:38:34.581582   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:34.581919   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:34.581974   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:35.080651   38829 type.go:168] "Request Body" body=""
	I1213 18:38:35.080731   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:35.081024   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:35.580713   38829 type.go:168] "Request Body" body=""
	I1213 18:38:35.580792   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:35.581140   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:36.080919   38829 type.go:168] "Request Body" body=""
	I1213 18:38:36.080998   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:36.081335   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:36.580681   38829 type.go:168] "Request Body" body=""
	I1213 18:38:36.580752   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:36.581033   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:37.080717   38829 type.go:168] "Request Body" body=""
	I1213 18:38:37.080818   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:37.081165   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:37.081218   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:37.580733   38829 type.go:168] "Request Body" body=""
	I1213 18:38:37.580809   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:37.581143   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:38.080691   38829 type.go:168] "Request Body" body=""
	I1213 18:38:38.080786   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:38.081186   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:38.581125   38829 type.go:168] "Request Body" body=""
	I1213 18:38:38.581202   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:38.581601   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:39.081372   38829 type.go:168] "Request Body" body=""
	I1213 18:38:39.081450   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:39.081746   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:39.081795   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:39.581476   38829 type.go:168] "Request Body" body=""
	I1213 18:38:39.581574   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:39.581834   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:40.080652   38829 type.go:168] "Request Body" body=""
	I1213 18:38:40.080736   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:40.081070   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:40.580762   38829 type.go:168] "Request Body" body=""
	I1213 18:38:40.580837   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:40.581170   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:41.080790   38829 type.go:168] "Request Body" body=""
	I1213 18:38:41.080859   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:41.081138   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:41.580736   38829 type.go:168] "Request Body" body=""
	I1213 18:38:41.580815   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:41.581161   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:41.581213   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:42.081232   38829 type.go:168] "Request Body" body=""
	I1213 18:38:42.081358   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:42.081865   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:42.580689   38829 type.go:168] "Request Body" body=""
	I1213 18:38:42.580771   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:42.581121   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:43.080823   38829 type.go:168] "Request Body" body=""
	I1213 18:38:43.080907   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:43.081225   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:43.580730   38829 type.go:168] "Request Body" body=""
	I1213 18:38:43.580836   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:43.581158   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:44.081575   38829 type.go:168] "Request Body" body=""
	I1213 18:38:44.081651   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:44.081974   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:44.082018   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:44.580749   38829 type.go:168] "Request Body" body=""
	I1213 18:38:44.580850   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:44.581196   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:45.080840   38829 type.go:168] "Request Body" body=""
	I1213 18:38:45.080920   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:45.081286   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:45.580954   38829 type.go:168] "Request Body" body=""
	I1213 18:38:45.581055   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:45.581346   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:46.081059   38829 type.go:168] "Request Body" body=""
	I1213 18:38:46.081132   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:46.081421   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:46.581118   38829 type.go:168] "Request Body" body=""
	I1213 18:38:46.581200   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:46.581535   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:46.581590   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:47.081106   38829 type.go:168] "Request Body" body=""
	I1213 18:38:47.081224   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:47.081480   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:47.581264   38829 type.go:168] "Request Body" body=""
	I1213 18:38:47.581336   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:47.581677   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:48.081348   38829 type.go:168] "Request Body" body=""
	I1213 18:38:48.081420   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:48.081786   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:48.580712   38829 type.go:168] "Request Body" body=""
	I1213 18:38:48.580809   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:48.581132   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:49.081267   38829 type.go:168] "Request Body" body=""
	I1213 18:38:49.081338   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:49.081661   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:49.081719   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:49.581307   38829 type.go:168] "Request Body" body=""
	I1213 18:38:49.581390   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:49.581723   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:50.081491   38829 type.go:168] "Request Body" body=""
	I1213 18:38:50.081558   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:50.081836   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:50.581617   38829 type.go:168] "Request Body" body=""
	I1213 18:38:50.581690   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:50.582006   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:51.080731   38829 type.go:168] "Request Body" body=""
	I1213 18:38:51.080809   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:51.081173   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:51.580635   38829 type.go:168] "Request Body" body=""
	I1213 18:38:51.580703   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:51.581040   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:51.581092   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:52.080731   38829 type.go:168] "Request Body" body=""
	I1213 18:38:52.080812   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:52.081170   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:52.580897   38829 type.go:168] "Request Body" body=""
	I1213 18:38:52.580975   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:52.581319   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:53.081002   38829 type.go:168] "Request Body" body=""
	I1213 18:38:53.081090   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:53.081366   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:53.580734   38829 type.go:168] "Request Body" body=""
	I1213 18:38:53.580811   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:53.581210   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:53.581264   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:54.081117   38829 type.go:168] "Request Body" body=""
	I1213 18:38:54.081197   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:54.081547   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:54.581298   38829 type.go:168] "Request Body" body=""
	I1213 18:38:54.581371   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:54.581643   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:55.081403   38829 type.go:168] "Request Body" body=""
	I1213 18:38:55.081482   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:55.081842   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:55.581455   38829 type.go:168] "Request Body" body=""
	I1213 18:38:55.581534   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:55.581851   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:55.581906   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:56.080602   38829 type.go:168] "Request Body" body=""
	I1213 18:38:56.080680   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:56.081049   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:56.580731   38829 type.go:168] "Request Body" body=""
	I1213 18:38:56.580803   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:56.581197   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:57.080761   38829 type.go:168] "Request Body" body=""
	I1213 18:38:57.080844   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:57.081204   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:57.580625   38829 type.go:168] "Request Body" body=""
	I1213 18:38:57.580703   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:57.580967   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:58.080697   38829 type.go:168] "Request Body" body=""
	I1213 18:38:58.080767   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:58.081073   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:58.081121   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:58.580746   38829 type.go:168] "Request Body" body=""
	I1213 18:38:58.580821   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:58.581193   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:59.080619   38829 type.go:168] "Request Body" body=""
	I1213 18:38:59.080690   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:59.080957   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:59.580697   38829 type.go:168] "Request Body" body=""
	I1213 18:38:59.580775   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:59.581075   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:00.080781   38829 type.go:168] "Request Body" body=""
	I1213 18:39:00.080864   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:00.081214   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:00.081263   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 18:39:02.081306   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 18:39:04.581303   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 18:39:07.081214   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 18:39:09.081581   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 18:39:11.581171   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 18:39:13.581332   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 18:39:16.081237   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 18:39:18.081330   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 18:39:20.082018   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 18:39:22.581194   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 18:39:24.581951   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 18:39:27.081165   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 18:39:29.081672   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 18:39:31.581394   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 18:39:34.081358   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 18:39:36.581264   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 18:39:38.581608   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 18:39:40.582139   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 18:39:43.081283   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 18:39:45.081434   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 18:39:47.581164   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 18:39:49.581664   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 18:39:52.081249   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 18:39:54.081712   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 18:39:56.581262   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 18:39:58.582075   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 18:40:00.583244   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:01.080684   38829 type.go:168] "Request Body" body=""
	I1213 18:40:01.080755   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:01.081087   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:01.580820   38829 type.go:168] "Request Body" body=""
	I1213 18:40:01.580895   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:01.581240   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:02.080921   38829 type.go:168] "Request Body" body=""
	I1213 18:40:02.080993   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:02.081270   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:02.580731   38829 type.go:168] "Request Body" body=""
	I1213 18:40:02.580806   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:02.581172   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:03.080880   38829 type.go:168] "Request Body" body=""
	I1213 18:40:03.080955   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:03.081306   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:03.081361   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:03.580996   38829 type.go:168] "Request Body" body=""
	I1213 18:40:03.581076   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:03.581335   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:04.080737   38829 type.go:168] "Request Body" body=""
	I1213 18:40:04.080818   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:04.081183   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:04.580737   38829 type.go:168] "Request Body" body=""
	I1213 18:40:04.580808   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:04.581149   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:05.080850   38829 type.go:168] "Request Body" body=""
	I1213 18:40:05.080927   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:05.081263   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:05.580963   38829 type.go:168] "Request Body" body=""
	I1213 18:40:05.581056   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:05.581401   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:05.581460   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:06.081245   38829 type.go:168] "Request Body" body=""
	I1213 18:40:06.081316   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:06.081669   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:06.581426   38829 type.go:168] "Request Body" body=""
	I1213 18:40:06.581509   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:06.581848   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:07.081645   38829 type.go:168] "Request Body" body=""
	I1213 18:40:07.081722   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:07.082062   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:07.580728   38829 type.go:168] "Request Body" body=""
	I1213 18:40:07.580813   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:07.581162   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:08.080728   38829 type.go:168] "Request Body" body=""
	I1213 18:40:08.080798   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:08.081088   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:08.081131   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:08.580917   38829 type.go:168] "Request Body" body=""
	I1213 18:40:08.580997   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:08.581369   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:09.081067   38829 type.go:168] "Request Body" body=""
	I1213 18:40:09.081141   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:09.081470   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:09.581192   38829 type.go:168] "Request Body" body=""
	I1213 18:40:09.581258   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:09.581523   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:10.081376   38829 type.go:168] "Request Body" body=""
	I1213 18:40:10.081454   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:10.081809   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:10.081865   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:10.581615   38829 type.go:168] "Request Body" body=""
	I1213 18:40:10.581696   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:10.582036   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:11.080690   38829 type.go:168] "Request Body" body=""
	I1213 18:40:11.080762   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:11.081125   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:11.580814   38829 type.go:168] "Request Body" body=""
	I1213 18:40:11.580891   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:11.581233   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:12.080745   38829 type.go:168] "Request Body" body=""
	I1213 18:40:12.080820   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:12.081174   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:12.580730   38829 type.go:168] "Request Body" body=""
	I1213 18:40:12.580802   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:12.581118   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:12.581177   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:13.080870   38829 type.go:168] "Request Body" body=""
	I1213 18:40:13.080953   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:13.081298   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:13.580990   38829 type.go:168] "Request Body" body=""
	I1213 18:40:13.581130   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:13.581452   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:14.081563   38829 type.go:168] "Request Body" body=""
	I1213 18:40:14.081631   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:14.081949   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:14.580642   38829 type.go:168] "Request Body" body=""
	I1213 18:40:14.580724   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:14.581092   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:15.080672   38829 type.go:168] "Request Body" body=""
	I1213 18:40:15.080749   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:15.081138   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:15.081197   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:15.580905   38829 type.go:168] "Request Body" body=""
	I1213 18:40:15.580977   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:15.581270   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:16.080728   38829 type.go:168] "Request Body" body=""
	I1213 18:40:16.080801   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:16.081170   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:16.580745   38829 type.go:168] "Request Body" body=""
	I1213 18:40:16.580823   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:16.581182   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:17.080854   38829 type.go:168] "Request Body" body=""
	I1213 18:40:17.080925   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:17.081196   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:17.081236   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:17.580885   38829 type.go:168] "Request Body" body=""
	I1213 18:40:17.580960   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:17.581311   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:18.081048   38829 type.go:168] "Request Body" body=""
	I1213 18:40:18.081128   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:18.081456   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:18.581421   38829 type.go:168] "Request Body" body=""
	I1213 18:40:18.581495   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:18.581752   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:19.081269   38829 type.go:168] "Request Body" body=""
	I1213 18:40:19.081345   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:19.081667   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:19.081723   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:19.581465   38829 type.go:168] "Request Body" body=""
	I1213 18:40:19.581546   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:19.581834   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:20.081620   38829 type.go:168] "Request Body" body=""
	I1213 18:40:20.081707   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:20.082023   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:20.580732   38829 type.go:168] "Request Body" body=""
	I1213 18:40:20.580806   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:20.581185   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:21.080748   38829 type.go:168] "Request Body" body=""
	I1213 18:40:21.080828   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:21.081195   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:21.580880   38829 type.go:168] "Request Body" body=""
	I1213 18:40:21.580954   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:21.581229   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:21.581273   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:22.080726   38829 type.go:168] "Request Body" body=""
	I1213 18:40:22.080802   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:22.081186   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:22.580892   38829 type.go:168] "Request Body" body=""
	I1213 18:40:22.580971   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:22.581314   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:23.080852   38829 type.go:168] "Request Body" body=""
	I1213 18:40:23.080921   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:23.081254   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:23.580738   38829 type.go:168] "Request Body" body=""
	I1213 18:40:23.580816   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:23.581213   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:24.080992   38829 type.go:168] "Request Body" body=""
	I1213 18:40:24.081086   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:24.081439   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:24.081493   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:24.581181   38829 type.go:168] "Request Body" body=""
	I1213 18:40:24.581254   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:24.581518   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:25.081519   38829 type.go:168] "Request Body" body=""
	I1213 18:40:25.081638   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:25.082066   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:25.580956   38829 type.go:168] "Request Body" body=""
	I1213 18:40:25.581049   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:25.581403   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:26.081103   38829 type.go:168] "Request Body" body=""
	I1213 18:40:26.081188   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:26.081496   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:26.081544   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:26.581271   38829 type.go:168] "Request Body" body=""
	I1213 18:40:26.581346   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:26.581679   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:27.081463   38829 type.go:168] "Request Body" body=""
	I1213 18:40:27.081544   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:27.081845   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:27.581582   38829 type.go:168] "Request Body" body=""
	I1213 18:40:27.581657   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:27.581970   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:28.080670   38829 type.go:168] "Request Body" body=""
	I1213 18:40:28.080746   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:28.081095   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:28.580759   38829 type.go:168] "Request Body" body=""
	I1213 18:40:28.580833   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:28.581189   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:28.581244   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:29.080966   38829 type.go:168] "Request Body" body=""
	I1213 18:40:29.081057   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:29.081325   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:29.580737   38829 type.go:168] "Request Body" body=""
	I1213 18:40:29.580809   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:29.581235   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:30.080981   38829 type.go:168] "Request Body" body=""
	I1213 18:40:30.081106   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:30.081499   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:30.581288   38829 type.go:168] "Request Body" body=""
	I1213 18:40:30.581365   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:30.581686   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:30.581744   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:31.081563   38829 type.go:168] "Request Body" body=""
	I1213 18:40:31.081643   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:31.081985   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:31.580733   38829 type.go:168] "Request Body" body=""
	I1213 18:40:31.580813   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:31.581128   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:32.080686   38829 type.go:168] "Request Body" body=""
	I1213 18:40:32.080759   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:32.081089   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:32.580719   38829 type.go:168] "Request Body" body=""
	I1213 18:40:32.580795   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:32.581153   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:33.080697   38829 type.go:168] "Request Body" body=""
	I1213 18:40:33.080771   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:33.081078   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:33.081125   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:33.580695   38829 type.go:168] "Request Body" body=""
	I1213 18:40:33.580776   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:33.581082   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:34.080711   38829 type.go:168] "Request Body" body=""
	I1213 18:40:34.080785   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:34.081116   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:34.580731   38829 type.go:168] "Request Body" body=""
	I1213 18:40:34.580810   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:34.581135   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:35.080858   38829 type.go:168] "Request Body" body=""
	I1213 18:40:35.080940   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:35.081258   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:35.081316   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:35.580736   38829 type.go:168] "Request Body" body=""
	I1213 18:40:35.580819   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:35.581180   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:36.080905   38829 type.go:168] "Request Body" body=""
	I1213 18:40:36.080982   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:36.081405   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:36.580715   38829 type.go:168] "Request Body" body=""
	I1213 18:40:36.580780   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:36.581071   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:37.080758   38829 type.go:168] "Request Body" body=""
	I1213 18:40:37.080841   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:37.081177   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:37.580742   38829 type.go:168] "Request Body" body=""
	I1213 18:40:37.580822   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:37.581185   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:37.581240   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:38.080845   38829 type.go:168] "Request Body" body=""
	I1213 18:40:38.080924   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:38.081284   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:38.580992   38829 type.go:168] "Request Body" body=""
	I1213 18:40:38.581079   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:38.581427   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:39.081037   38829 type.go:168] "Request Body" body=""
	I1213 18:40:39.081109   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:39.081425   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:39.580691   38829 type.go:168] "Request Body" body=""
	I1213 18:40:39.580779   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:39.581096   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:40.080864   38829 type.go:168] "Request Body" body=""
	I1213 18:40:40.080952   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:40.081316   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:40.081370   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:40.581072   38829 type.go:168] "Request Body" body=""
	I1213 18:40:40.581147   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:40.581455   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:41.080649   38829 type.go:168] "Request Body" body=""
	I1213 18:40:41.080720   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:41.080968   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:41.580717   38829 type.go:168] "Request Body" body=""
	I1213 18:40:41.580821   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:41.581143   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:42.080793   38829 type.go:168] "Request Body" body=""
	I1213 18:40:42.080889   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:42.081224   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:42.580774   38829 type.go:168] "Request Body" body=""
	I1213 18:40:42.580846   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:42.581129   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:42.581171   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:43.080817   38829 type.go:168] "Request Body" body=""
	I1213 18:40:43.080889   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:43.081182   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:43.580912   38829 type.go:168] "Request Body" body=""
	I1213 18:40:43.581022   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:43.581350   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:44.081100   38829 type.go:168] "Request Body" body=""
	I1213 18:40:44.081184   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:44.081466   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:44.581295   38829 type.go:168] "Request Body" body=""
	I1213 18:40:44.581368   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:44.581680   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:44.581735   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:45.081574   38829 type.go:168] "Request Body" body=""
	I1213 18:40:45.081671   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:45.082057   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:45.580753   38829 type.go:168] "Request Body" body=""
	I1213 18:40:45.580826   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:45.581123   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:46.080724   38829 type.go:168] "Request Body" body=""
	I1213 18:40:46.080807   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:46.081173   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:46.580875   38829 type.go:168] "Request Body" body=""
	I1213 18:40:46.580954   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:46.581347   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:47.080772   38829 type.go:168] "Request Body" body=""
	I1213 18:40:47.080843   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:47.081169   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:47.081222   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:47.580721   38829 type.go:168] "Request Body" body=""
	I1213 18:40:47.580803   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:47.581145   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:48.080733   38829 type.go:168] "Request Body" body=""
	I1213 18:40:48.080812   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:48.081180   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:48.581574   38829 type.go:168] "Request Body" body=""
	I1213 18:40:48.581646   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:48.581923   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:49.080895   38829 type.go:168] "Request Body" body=""
	I1213 18:40:49.080969   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:49.081284   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:49.081332   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:49.580737   38829 type.go:168] "Request Body" body=""
	I1213 18:40:49.580813   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:49.581189   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:50.080877   38829 type.go:168] "Request Body" body=""
	I1213 18:40:50.080951   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:50.081313   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:50.580740   38829 type.go:168] "Request Body" body=""
	I1213 18:40:50.580817   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:50.581173   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:51.080726   38829 type.go:168] "Request Body" body=""
	I1213 18:40:51.080811   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:51.081140   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:51.580661   38829 type.go:168] "Request Body" body=""
	I1213 18:40:51.580735   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:51.581094   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:51.581147   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:52.080738   38829 type.go:168] "Request Body" body=""
	I1213 18:40:52.080814   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:52.081156   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:52.580707   38829 type.go:168] "Request Body" body=""
	I1213 18:40:52.580781   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:52.581124   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:53.080661   38829 type.go:168] "Request Body" body=""
	I1213 18:40:53.080737   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:53.081101   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:53.580661   38829 type.go:168] "Request Body" body=""
	I1213 18:40:53.580737   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:53.581073   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:54.081075   38829 type.go:168] "Request Body" body=""
	I1213 18:40:54.081153   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:54.081490   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:54.081544   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:54.580688   38829 type.go:168] "Request Body" body=""
	I1213 18:40:54.580770   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:54.581090   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:55.080755   38829 type.go:168] "Request Body" body=""
	I1213 18:40:55.080845   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:55.081218   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:55.580731   38829 type.go:168] "Request Body" body=""
	I1213 18:40:55.580806   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:55.581128   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:56.080828   38829 type.go:168] "Request Body" body=""
	I1213 18:40:56.080907   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:56.081254   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:56.580945   38829 type.go:168] "Request Body" body=""
	I1213 18:40:56.581061   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:56.581383   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:56.581438   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:57.081145   38829 type.go:168] "Request Body" body=""
	I1213 18:40:57.081219   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:57.081499   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:57.581369   38829 type.go:168] "Request Body" body=""
	I1213 18:40:57.581461   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:57.581753   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:58.081564   38829 type.go:168] "Request Body" body=""
	I1213 18:40:58.081635   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:58.081964   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:58.580734   38829 type.go:168] "Request Body" body=""
	I1213 18:40:58.580811   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:58.581151   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:59.081182   38829 type.go:168] "Request Body" body=""
	I1213 18:40:59.081258   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:59.081514   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:59.081555   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:59.581349   38829 type.go:168] "Request Body" body=""
	I1213 18:40:59.581423   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:59.581720   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:00.081815   38829 type.go:168] "Request Body" body=""
	I1213 18:41:00.081903   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:00.082221   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:00.581646   38829 type.go:168] "Request Body" body=""
	I1213 18:41:00.581716   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:00.582021   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:01.080712   38829 type.go:168] "Request Body" body=""
	I1213 18:41:01.080792   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:01.081087   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:01.580731   38829 type.go:168] "Request Body" body=""
	I1213 18:41:01.580810   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:01.581320   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:01.581376   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:02.080810   38829 type.go:168] "Request Body" body=""
	I1213 18:41:02.080888   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:02.081180   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:02.580849   38829 type.go:168] "Request Body" body=""
	I1213 18:41:02.580920   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:02.581274   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:03.080853   38829 type.go:168] "Request Body" body=""
	I1213 18:41:03.080929   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:03.081297   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:03.580687   38829 type.go:168] "Request Body" body=""
	I1213 18:41:03.580761   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:03.581113   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:04.080818   38829 type.go:168] "Request Body" body=""
	I1213 18:41:04.080891   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:04.081231   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:04.081279   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:04.580784   38829 type.go:168] "Request Body" body=""
	I1213 18:41:04.580861   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:04.581254   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:05.080702   38829 type.go:168] "Request Body" body=""
	I1213 18:41:05.080774   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:05.081067   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:05.580726   38829 type.go:168] "Request Body" body=""
	I1213 18:41:05.580823   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:05.581149   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:06.080754   38829 type.go:168] "Request Body" body=""
	I1213 18:41:06.080824   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:06.081183   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:06.580809   38829 type.go:168] "Request Body" body=""
	I1213 18:41:06.580876   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:06.581193   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:06.581275   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:07.080748   38829 type.go:168] "Request Body" body=""
	I1213 18:41:07.080818   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:07.081155   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:07.580864   38829 type.go:168] "Request Body" body=""
	I1213 18:41:07.580935   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:07.581293   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:08.080815   38829 type.go:168] "Request Body" body=""
	I1213 18:41:08.080882   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:08.081228   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:08.581184   38829 type.go:168] "Request Body" body=""
	I1213 18:41:08.581267   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:08.581600   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:08.581650   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:09.081329   38829 type.go:168] "Request Body" body=""
	I1213 18:41:09.081400   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:09.081701   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:09.581386   38829 type.go:168] "Request Body" body=""
	I1213 18:41:09.581459   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:09.581736   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:10.081624   38829 type.go:168] "Request Body" body=""
	I1213 18:41:10.081709   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:10.082054   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:10.580758   38829 type.go:168] "Request Body" body=""
	I1213 18:41:10.580829   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:10.581165   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:11.080690   38829 type.go:168] "Request Body" body=""
	I1213 18:41:11.080767   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:11.081130   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:11.081225   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:11.580737   38829 type.go:168] "Request Body" body=""
	I1213 18:41:11.580838   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:11.581297   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:12.080983   38829 type.go:168] "Request Body" body=""
	I1213 18:41:12.081129   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:12.081449   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:12.581247   38829 type.go:168] "Request Body" body=""
	I1213 18:41:12.581315   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:12.581576   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:13.080944   38829 type.go:168] "Request Body" body=""
	I1213 18:41:13.081031   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:13.081378   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:13.081435   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:13.580973   38829 type.go:168] "Request Body" body=""
	I1213 18:41:13.581116   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:13.581497   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:14.081648   38829 type.go:168] "Request Body" body=""
	I1213 18:41:14.081731   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:14.082000   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:14.580709   38829 type.go:168] "Request Body" body=""
	I1213 18:41:14.580805   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:14.581161   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:15.080870   38829 type.go:168] "Request Body" body=""
	I1213 18:41:15.080947   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:15.081336   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:15.580661   38829 type.go:168] "Request Body" body=""
	I1213 18:41:15.580729   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:15.581047   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:15.581086   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:16.080721   38829 type.go:168] "Request Body" body=""
	I1213 18:41:16.080833   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:16.081148   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:16.580760   38829 type.go:168] "Request Body" body=""
	I1213 18:41:16.580840   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:16.581166   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:17.080685   38829 type.go:168] "Request Body" body=""
	I1213 18:41:17.080772   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:17.081106   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:17.580714   38829 type.go:168] "Request Body" body=""
	I1213 18:41:17.580795   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:17.581116   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:17.581162   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:18.080745   38829 type.go:168] "Request Body" body=""
	I1213 18:41:18.080820   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:18.081200   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:18.581224   38829 type.go:168] "Request Body" body=""
	I1213 18:41:18.581296   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:18.581580   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:19.081352   38829 type.go:168] "Request Body" body=""
	I1213 18:41:19.081427   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:19.081734   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:19.581454   38829 type.go:168] "Request Body" body=""
	I1213 18:41:19.581571   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:19.581908   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:19.581960   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:20.081575   38829 type.go:168] "Request Body" body=""
	I1213 18:41:20.081653   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:20.081930   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:20.580639   38829 type.go:168] "Request Body" body=""
	I1213 18:41:20.580722   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:20.581082   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:21.080807   38829 type.go:168] "Request Body" body=""
	I1213 18:41:21.080885   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:21.081222   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:21.580675   38829 type.go:168] "Request Body" body=""
	I1213 18:41:21.580755   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:21.581125   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:22.080711   38829 type.go:168] "Request Body" body=""
	I1213 18:41:22.080789   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:22.081124   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:22.081174   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:22.580748   38829 type.go:168] "Request Body" body=""
	I1213 18:41:22.580823   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:22.581169   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:23.080686   38829 type.go:168] "Request Body" body=""
	I1213 18:41:23.080758   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:23.081067   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:23.580652   38829 type.go:168] "Request Body" body=""
	I1213 18:41:23.580733   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:23.581072   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:24.081615   38829 type.go:168] "Request Body" body=""
	I1213 18:41:24.081701   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:24.082028   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:24.082086   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:24.580715   38829 type.go:168] "Request Body" body=""
	I1213 18:41:24.580790   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:24.581145   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:25.080723   38829 type.go:168] "Request Body" body=""
	I1213 18:41:25.080800   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:25.081135   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:25.580732   38829 type.go:168] "Request Body" body=""
	I1213 18:41:25.580804   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:25.581183   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:26.080778   38829 type.go:168] "Request Body" body=""
	I1213 18:41:26.080846   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:26.081178   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:26.580887   38829 type.go:168] "Request Body" body=""
	I1213 18:41:26.580963   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:26.581315   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:26.581370   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:27.080706   38829 type.go:168] "Request Body" body=""
	I1213 18:41:27.080786   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:27.081128   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:27.580668   38829 type.go:168] "Request Body" body=""
	I1213 18:41:27.580741   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:27.581056   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:28.080772   38829 type.go:168] "Request Body" body=""
	I1213 18:41:28.080845   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:28.081201   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:28.580902   38829 type.go:168] "Request Body" body=""
	I1213 18:41:28.580974   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:28.581301   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:29.080749   38829 type.go:168] "Request Body" body=""
	I1213 18:41:29.080817   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:29.081091   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:29.081132   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:29.580839   38829 type.go:168] "Request Body" body=""
	I1213 18:41:29.580981   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:29.581329   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:30.080766   38829 type.go:168] "Request Body" body=""
	I1213 18:41:30.080851   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:30.081270   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:30.580990   38829 type.go:168] "Request Body" body=""
	I1213 18:41:30.581076   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:30.581343   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:31.080711   38829 type.go:168] "Request Body" body=""
	I1213 18:41:31.080787   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:31.081149   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:31.081200   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:31.580852   38829 type.go:168] "Request Body" body=""
	I1213 18:41:31.580935   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:31.581309   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:32.080976   38829 type.go:168] "Request Body" body=""
	I1213 18:41:32.081071   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:32.081376   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:32.580730   38829 type.go:168] "Request Body" body=""
	I1213 18:41:32.580812   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:32.581179   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:33.080899   38829 type.go:168] "Request Body" body=""
	I1213 18:41:33.080979   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:33.081353   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:33.081413   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:33.580694   38829 type.go:168] "Request Body" body=""
	I1213 18:41:33.580774   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:33.581069   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:34.081613   38829 type.go:168] "Request Body" body=""
	I1213 18:41:34.081689   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:34.082033   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:34.580727   38829 type.go:168] "Request Body" body=""
	I1213 18:41:34.580828   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:34.581146   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:35.080790   38829 type.go:168] "Request Body" body=""
	I1213 18:41:35.080863   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:35.081157   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:35.580696   38829 type.go:168] "Request Body" body=""
	I1213 18:41:35.580790   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:35.581078   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:35.581121   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:36.080756   38829 type.go:168] "Request Body" body=""
	I1213 18:41:36.080851   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:36.081282   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:36.580668   38829 type.go:168] "Request Body" body=""
	I1213 18:41:36.580739   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:36.581032   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:37.080757   38829 type.go:168] "Request Body" body=""
	I1213 18:41:37.080851   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:37.081179   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:37.580859   38829 type.go:168] "Request Body" body=""
	I1213 18:41:37.580931   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:37.581253   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:37.581299   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:38.080940   38829 type.go:168] "Request Body" body=""
	I1213 18:41:38.081033   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:38.081302   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:38.581248   38829 type.go:168] "Request Body" body=""
	I1213 18:41:38.581332   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:38.581671   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:39.081578   38829 type.go:168] "Request Body" body=""
	I1213 18:41:39.081659   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:39.081987   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:39.580653   38829 type.go:168] "Request Body" body=""
	I1213 18:41:39.580729   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:39.581076   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:40.080757   38829 type.go:168] "Request Body" body=""
	I1213 18:41:40.080841   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:40.081195   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:40.081257   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:40.580739   38829 type.go:168] "Request Body" body=""
	I1213 18:41:40.580813   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:40.581120   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:41.080675   38829 type.go:168] "Request Body" body=""
	I1213 18:41:41.080749   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:41.081085   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:41.580789   38829 type.go:168] "Request Body" body=""
	I1213 18:41:41.580862   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:41.581170   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:42.080802   38829 type.go:168] "Request Body" body=""
	I1213 18:41:42.080877   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:42.081216   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:42.580919   38829 type.go:168] "Request Body" body=""
	I1213 18:41:42.580994   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:42.581286   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:42.581339   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:43.080761   38829 type.go:168] "Request Body" body=""
	I1213 18:41:43.080833   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:43.081217   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:43.580933   38829 type.go:168] "Request Body" body=""
	I1213 18:41:43.581025   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:43.581344   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:44.081112   38829 type.go:168] "Request Body" body=""
	I1213 18:41:44.081178   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:44.081445   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:44.581279   38829 type.go:168] "Request Body" body=""
	I1213 18:41:44.581350   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:44.581653   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:44.581708   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:45.081520   38829 type.go:168] "Request Body" body=""
	I1213 18:41:45.081600   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:45.081937   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:45.580652   38829 type.go:168] "Request Body" body=""
	I1213 18:41:45.580731   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:45.581051   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:46.080751   38829 type.go:168] "Request Body" body=""
	I1213 18:41:46.080838   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:46.081265   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:46.580968   38829 type.go:168] "Request Body" body=""
	I1213 18:41:46.581065   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:46.581388   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:47.080619   38829 type.go:168] "Request Body" body=""
	I1213 18:41:47.080685   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:47.080942   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:47.080980   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:47.580668   38829 type.go:168] "Request Body" body=""
	I1213 18:41:47.580743   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:47.581077   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:48.080761   38829 type.go:168] "Request Body" body=""
	I1213 18:41:48.080842   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:48.081166   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:48.581104   38829 type.go:168] "Request Body" body=""
	I1213 18:41:48.581172   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:48.581434   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:49.081502   38829 type.go:168] "Request Body" body=""
	I1213 18:41:49.081574   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:49.081903   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:49.081968   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:49.580639   38829 type.go:168] "Request Body" body=""
	I1213 18:41:49.580722   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:49.581089   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:50.080709   38829 type.go:168] "Request Body" body=""
	I1213 18:41:50.080785   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:50.081111   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:50.580720   38829 type.go:168] "Request Body" body=""
	I1213 18:41:50.580802   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:50.581143   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:51.080888   38829 type.go:168] "Request Body" body=""
	I1213 18:41:51.080963   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:51.081279   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:51.580674   38829 type.go:168] "Request Body" body=""
	I1213 18:41:51.580740   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:51.581077   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:51.581128   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:52.080773   38829 type.go:168] "Request Body" body=""
	I1213 18:41:52.080894   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:52.081249   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:52.580793   38829 type.go:168] "Request Body" body=""
	I1213 18:41:52.580867   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:52.581218   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:53.080706   38829 type.go:168] "Request Body" body=""
	I1213 18:41:53.080781   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:53.081080   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:53.580683   38829 type.go:168] "Request Body" body=""
	I1213 18:41:53.580763   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:53.581106   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:53.581159   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:54.080735   38829 type.go:168] "Request Body" body=""
	I1213 18:41:54.080815   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:54.081170   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:54.580662   38829 type.go:168] "Request Body" body=""
	I1213 18:41:54.580733   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:54.581088   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:55.080714   38829 type.go:168] "Request Body" body=""
	I1213 18:41:55.080791   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:55.081154   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:55.580764   38829 type.go:168] "Request Body" body=""
	I1213 18:41:55.580837   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:55.581137   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:55.581182   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:56.080717   38829 type.go:168] "Request Body" body=""
	I1213 18:41:56.080790   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:56.081130   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:56.580729   38829 type.go:168] "Request Body" body=""
	I1213 18:41:56.580826   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:56.581140   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:57.080852   38829 type.go:168] "Request Body" body=""
	I1213 18:41:57.080924   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:57.081256   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:57.580921   38829 type.go:168] "Request Body" body=""
	I1213 18:41:57.581000   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:57.581269   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:57.581307   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:58.080750   38829 type.go:168] "Request Body" body=""
	I1213 18:41:58.080828   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:58.081201   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:58.580714   38829 type.go:168] "Request Body" body=""
	I1213 18:41:58.580799   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:58.581146   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:59.081521   38829 type.go:168] "Request Body" body=""
	I1213 18:41:59.081580   38829 node_ready.go:38] duration metric: took 6m0.001077775s for node "functional-752103" to be "Ready" ...
	I1213 18:41:59.084666   38829 out.go:203] 
	W1213 18:41:59.087601   38829 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1213 18:41:59.087625   38829 out.go:285] * 
	W1213 18:41:59.089766   38829 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 18:41:59.092666   38829 out.go:203] 
	
	
	==> CRI-O <==
	Dec 13 18:35:56 functional-752103 crio[5390]: time="2025-12-13T18:35:56.945290742Z" level=info msg="Using the internal default seccomp profile"
	Dec 13 18:35:56 functional-752103 crio[5390]: time="2025-12-13T18:35:56.945420457Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Dec 13 18:35:56 functional-752103 crio[5390]: time="2025-12-13T18:35:56.945474053Z" level=info msg="No blockio config file specified, blockio not configured"
	Dec 13 18:35:56 functional-752103 crio[5390]: time="2025-12-13T18:35:56.945524113Z" level=info msg="RDT not available in the host system"
	Dec 13 18:35:56 functional-752103 crio[5390]: time="2025-12-13T18:35:56.945586898Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 13 18:35:56 functional-752103 crio[5390]: time="2025-12-13T18:35:56.946503864Z" level=info msg="Conmon does support the --sync option"
	Dec 13 18:35:56 functional-752103 crio[5390]: time="2025-12-13T18:35:56.946586137Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 13 18:35:56 functional-752103 crio[5390]: time="2025-12-13T18:35:56.946650473Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 13 18:35:56 functional-752103 crio[5390]: time="2025-12-13T18:35:56.947460157Z" level=info msg="Conmon does support the --sync option"
	Dec 13 18:35:56 functional-752103 crio[5390]: time="2025-12-13T18:35:56.947581732Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 13 18:35:56 functional-752103 crio[5390]: time="2025-12-13T18:35:56.947794656Z" level=info msg="Updated default CNI network name to "
	Dec 13 18:35:56 functional-752103 crio[5390]: time="2025-12-13T18:35:56.948548372Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oc
i/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"cgroupfs\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n
uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_
memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_d
ir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [c
rio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 13 18:35:56 functional-752103 crio[5390]: time="2025-12-13T18:35:56.949209238Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 13 18:35:56 functional-752103 crio[5390]: time="2025-12-13T18:35:56.949399688Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 13 18:35:56 functional-752103 crio[5390]: time="2025-12-13T18:35:56.998014506Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 13 18:35:56 functional-752103 crio[5390]: time="2025-12-13T18:35:56.998049287Z" level=info msg="Starting seccomp notifier watcher"
	Dec 13 18:35:56 functional-752103 crio[5390]: time="2025-12-13T18:35:56.998091544Z" level=info msg="Create NRI interface"
	Dec 13 18:35:56 functional-752103 crio[5390]: time="2025-12-13T18:35:56.998182883Z" level=info msg="built-in NRI default validator is disabled"
	Dec 13 18:35:56 functional-752103 crio[5390]: time="2025-12-13T18:35:56.998191835Z" level=info msg="runtime interface created"
	Dec 13 18:35:56 functional-752103 crio[5390]: time="2025-12-13T18:35:56.998201903Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 13 18:35:56 functional-752103 crio[5390]: time="2025-12-13T18:35:56.998208114Z" level=info msg="runtime interface starting up..."
	Dec 13 18:35:56 functional-752103 crio[5390]: time="2025-12-13T18:35:56.99821394Z" level=info msg="starting plugins..."
	Dec 13 18:35:56 functional-752103 crio[5390]: time="2025-12-13T18:35:56.998225148Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 18:35:56 functional-752103 crio[5390]: time="2025-12-13T18:35:56.998287072Z" level=info msg="No systemd watchdog enabled"
	Dec 13 18:35:57 functional-752103 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:42:01.319486    8622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:42:01.319995    8622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:42:01.321823    8622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:42:01.322367    8622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:42:01.323999    8622 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 17:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014739] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.517365] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033368] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.774100] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.795951] kauditd_printk_skb: 39 callbacks suppressed
	[Dec13 18:17] overlayfs: idmapped layers are currently not supported
	[  +0.067652] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec13 18:23] overlayfs: idmapped layers are currently not supported
	[Dec13 18:24] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 18:42:01 up  1:24,  0 user,  load average: 0.23, 0.28, 0.42
	Linux functional-752103 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 18:41:59 functional-752103 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 18:41:59 functional-752103 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1129.
	Dec 13 18:41:59 functional-752103 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:41:59 functional-752103 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:41:59 functional-752103 kubelet[8515]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:41:59 functional-752103 kubelet[8515]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:41:59 functional-752103 kubelet[8515]: E1213 18:41:59.893205    8515 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 18:41:59 functional-752103 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 18:41:59 functional-752103 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 18:42:00 functional-752103 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1130.
	Dec 13 18:42:00 functional-752103 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:42:00 functional-752103 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:42:00 functional-752103 kubelet[8534]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:42:00 functional-752103 kubelet[8534]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:42:00 functional-752103 kubelet[8534]: E1213 18:42:00.653546    8534 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 18:42:00 functional-752103 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 18:42:00 functional-752103 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 18:42:01 functional-752103 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1131.
	Dec 13 18:42:01 functional-752103 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:42:01 functional-752103 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:42:01 functional-752103 kubelet[8627]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:42:01 functional-752103 kubelet[8627]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:42:01 functional-752103 kubelet[8627]: E1213 18:42:01.389237    8627 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 18:42:01 functional-752103 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 18:42:01 functional-752103 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
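The kubelet section of the log above explains why the apiserver never comes back: each kubelet restart (counter 1129-1131) fails configuration validation with "kubelet is configured to not run on a host using cgroup v1", so no node components start and every request to 192.168.49.2:8441 is refused. Below is a minimal sketch of how a host's cgroup hierarchy can be checked from Go, assuming golang.org/x/sys is available; it illustrates the common statfs-magic test and is not minikube's or the kubelet's own validation code:

	// cgroupcheck reports whether /sys/fs/cgroup is the unified cgroup v2
	// hierarchy, which is the condition the kubelet validation above enforces.
	package main

	import (
		"fmt"

		"golang.org/x/sys/unix"
	)

	// CGROUP2_SUPER_MAGIC from linux/magic.h ("cgrp").
	const cgroup2SuperMagic = 0x63677270

	func main() {
		var st unix.Statfs_t
		if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
			panic(err)
		}
		if uint64(st.Type) == cgroup2SuperMagic {
			fmt.Println("cgroup v2 (unified hierarchy)")
		} else {
			fmt.Println("cgroup v1 or hybrid hierarchy")
		}
	}

On this runner (Ubuntu 20.04, kernel 5.15, CgroupDriver cgroupfs per the docker info captured later in this report), such a check would take the v1/hybrid branch, which matches the kubelet restart loop recorded above.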
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-752103 -n functional-752103
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-752103 -n functional-752103: exit status 2 (379.594028ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-752103" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (368.51s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (2.42s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-752103 get po -A
functional_test.go:711: (dbg) Non-zero exit: kubectl --context functional-752103 get po -A: exit status 1 (58.087686ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:713: failed to get kubectl pods: args "kubectl --context functional-752103 get po -A" : exit status 1
functional_test.go:717: expected stderr to be empty but got *"The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?\n"*: args "kubectl --context functional-752103 get po -A"
functional_test.go:720: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-752103 get po -A"
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-752103
helpers_test.go:244: (dbg) docker inspect functional-752103:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b",
	        "Created": "2025-12-13T18:27:36.869398923Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 33347,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T18:27:36.933863328Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b/hostname",
	        "HostsPath": "/var/lib/docker/containers/d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b/hosts",
	        "LogPath": "/var/lib/docker/containers/d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b/d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b-json.log",
	        "Name": "/functional-752103",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-752103:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-752103",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b",
	                "LowerDir": "/var/lib/docker/overlay2/4f08fa508e4fa526830ae73227295c5d2e0629c99efbda84852b3df25bd9e170-init/diff:/var/lib/docker/overlay2/4cda671c3c20fb572bbb254b6cb2d66de67b46788c2aa883ec19024f1ff16f23/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4f08fa508e4fa526830ae73227295c5d2e0629c99efbda84852b3df25bd9e170/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4f08fa508e4fa526830ae73227295c5d2e0629c99efbda84852b3df25bd9e170/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4f08fa508e4fa526830ae73227295c5d2e0629c99efbda84852b3df25bd9e170/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-752103",
	                "Source": "/var/lib/docker/volumes/functional-752103/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-752103",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-752103",
	                "name.minikube.sigs.k8s.io": "functional-752103",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "625ea12887c8956887678f2408d6edd5b98f62bce458a6906f4f662a3001a53b",
	            "SandboxKey": "/var/run/docker/netns/625ea12887c8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-752103": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7e:2c:83:4a:30:9a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "84df48e9f7dac8c6a1b67709e5eea216d99d3f16eb50b96c7f0e4a82b3193d56",
	                    "EndpointID": "e69b1f9610d40396647a2d78f0170c31b9cd8e641fc8465e742649cccee8e591",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-752103",
	                        "d72b547cdcc2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
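The inspect output above shows the node container is still Running and that 8441/tcp is still published on 127.0.0.1:32786, so the refused connections are caused by nothing listening behind the mapping, not by a stopped container. A minimal sketch of reading that binding with the Docker Engine SDK for Go, assuming github.com/docker/docker/client and github.com/docker/go-connections/nat are available; `docker port functional-752103 8441` gives the same answer from the CLI:

	package main

	import (
		"context"
		"fmt"

		"github.com/docker/docker/client"
		"github.com/docker/go-connections/nat"
	)

	func main() {
		cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
		if err != nil {
			panic(err)
		}
		defer cli.Close()

		// Container name taken from the inspect output above.
		inspect, err := cli.ContainerInspect(context.Background(), "functional-752103")
		if err != nil {
			panic(err)
		}

		// NetworkSettings.Ports maps exposed container ports to host bindings.
		for _, b := range inspect.NetworkSettings.Ports[nat.Port("8441/tcp")] {
			fmt.Printf("apiserver 8441/tcp published on %s:%s\n", b.HostIP, b.HostPort)
		}
	}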
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-752103 -n functional-752103
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-752103 -n functional-752103: exit status 2 (335.261603ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-752103 logs -n 25: (1.032065438s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-350101 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                  │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ image          │ functional-350101 image load --daemon kicbase/echo-server:functional-350101 --alsologtostderr                                                             │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ ssh            │ functional-350101 ssh sudo cat /etc/ssl/certs/46372.pem                                                                                                   │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ ssh            │ functional-350101 ssh sudo cat /usr/share/ca-certificates/46372.pem                                                                                       │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ image          │ functional-350101 image ls                                                                                                                                │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ ssh            │ functional-350101 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ image          │ functional-350101 image save kicbase/echo-server:functional-350101 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ image          │ functional-350101 image rm kicbase/echo-server:functional-350101 --alsologtostderr                                                                        │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ image          │ functional-350101 image ls                                                                                                                                │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ image          │ functional-350101 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ update-context │ functional-350101 update-context --alsologtostderr -v=2                                                                                                   │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ image          │ functional-350101 image ls                                                                                                                                │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ update-context │ functional-350101 update-context --alsologtostderr -v=2                                                                                                   │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ update-context │ functional-350101 update-context --alsologtostderr -v=2                                                                                                   │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ image          │ functional-350101 image save --daemon kicbase/echo-server:functional-350101 --alsologtostderr                                                             │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ image          │ functional-350101 image ls --format yaml --alsologtostderr                                                                                                │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ image          │ functional-350101 image ls --format short --alsologtostderr                                                                                               │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ ssh            │ functional-350101 ssh pgrep buildkitd                                                                                                                     │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │                     │
	│ image          │ functional-350101 image ls --format json --alsologtostderr                                                                                                │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ image          │ functional-350101 image ls --format table --alsologtostderr                                                                                               │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ image          │ functional-350101 image build -t localhost/my-image:functional-350101 testdata/build --alsologtostderr                                                    │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ image          │ functional-350101 image ls                                                                                                                                │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ delete         │ -p functional-350101                                                                                                                                      │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ start          │ -p functional-752103 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0         │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │                     │
	│ start          │ -p functional-752103 --alsologtostderr -v=8                                                                                                               │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:35 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 18:35:53
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 18:35:53.999245   38829 out.go:360] Setting OutFile to fd 1 ...
	I1213 18:35:53.999434   38829 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:35:53.999464   38829 out.go:374] Setting ErrFile to fd 2...
	I1213 18:35:53.999486   38829 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:35:53.999778   38829 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-2686/.minikube/bin
	I1213 18:35:54.000250   38829 out.go:368] Setting JSON to false
	I1213 18:35:54.001308   38829 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4706,"bootTime":1765646248,"procs":163,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 18:35:54.001457   38829 start.go:143] virtualization:  
	I1213 18:35:54.010388   38829 out.go:179] * [functional-752103] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 18:35:54.014157   38829 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 18:35:54.014353   38829 notify.go:221] Checking for updates...
	I1213 18:35:54.020075   38829 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 18:35:54.023186   38829 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-2686/kubeconfig
	I1213 18:35:54.026171   38829 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-2686/.minikube
	I1213 18:35:54.029213   38829 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 18:35:54.032235   38829 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 18:35:54.035744   38829 config.go:182] Loaded profile config "functional-752103": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 18:35:54.035909   38829 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 18:35:54.059624   38829 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 18:35:54.059744   38829 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 18:35:54.127464   38829 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 18:35:54.118134446 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 18:35:54.127571   38829 docker.go:319] overlay module found
	I1213 18:35:54.130605   38829 out.go:179] * Using the docker driver based on existing profile
	I1213 18:35:54.133521   38829 start.go:309] selected driver: docker
	I1213 18:35:54.133548   38829 start.go:927] validating driver "docker" against &{Name:functional-752103 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-752103 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 18:35:54.133668   38829 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 18:35:54.133779   38829 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 18:35:54.194306   38829 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 18:35:54.184244205 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 18:35:54.194716   38829 cni.go:84] Creating CNI manager for ""
	I1213 18:35:54.194772   38829 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 18:35:54.194827   38829 start.go:353] cluster config:
	{Name:functional-752103 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-752103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 18:35:54.197953   38829 out.go:179] * Starting "functional-752103" primary control-plane node in "functional-752103" cluster
	I1213 18:35:54.200965   38829 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 18:35:54.203964   38829 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 18:35:54.207111   38829 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 18:35:54.207169   38829 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-2686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1213 18:35:54.207189   38829 cache.go:65] Caching tarball of preloaded images
	I1213 18:35:54.207200   38829 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 18:35:54.207268   38829 preload.go:238] Found /home/jenkins/minikube-integration/22122-2686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 18:35:54.207278   38829 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1213 18:35:54.207380   38829 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/config.json ...
	I1213 18:35:54.226684   38829 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 18:35:54.226707   38829 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 18:35:54.226736   38829 cache.go:243] Successfully downloaded all kic artifacts
	I1213 18:35:54.226765   38829 start.go:360] acquireMachinesLock for functional-752103: {Name:mkf4ec1d9e1836ef54983db4562aedfd1a9c51c3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 18:35:54.226834   38829 start.go:364] duration metric: took 45.136µs to acquireMachinesLock for "functional-752103"
	I1213 18:35:54.226856   38829 start.go:96] Skipping create...Using existing machine configuration
	I1213 18:35:54.226865   38829 fix.go:54] fixHost starting: 
	I1213 18:35:54.227126   38829 cli_runner.go:164] Run: docker container inspect functional-752103 --format={{.State.Status}}
	I1213 18:35:54.245088   38829 fix.go:112] recreateIfNeeded on functional-752103: state=Running err=<nil>
	W1213 18:35:54.245125   38829 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 18:35:54.248193   38829 out.go:252] * Updating the running docker "functional-752103" container ...
	I1213 18:35:54.248225   38829 machine.go:94] provisionDockerMachine start ...
	I1213 18:35:54.248302   38829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:35:54.265418   38829 main.go:143] libmachine: Using SSH client type: native
	I1213 18:35:54.265750   38829 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1213 18:35:54.265765   38829 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 18:35:54.412628   38829 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-752103
	
	I1213 18:35:54.412654   38829 ubuntu.go:182] provisioning hostname "functional-752103"
	I1213 18:35:54.412716   38829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:35:54.431532   38829 main.go:143] libmachine: Using SSH client type: native
	I1213 18:35:54.431834   38829 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1213 18:35:54.431851   38829 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-752103 && echo "functional-752103" | sudo tee /etc/hostname
	I1213 18:35:54.592050   38829 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-752103
	
	I1213 18:35:54.592214   38829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:35:54.614592   38829 main.go:143] libmachine: Using SSH client type: native
	I1213 18:35:54.614908   38829 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1213 18:35:54.614930   38829 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-752103' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-752103/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-752103' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 18:35:54.769516   38829 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 18:35:54.769546   38829 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-2686/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-2686/.minikube}
	I1213 18:35:54.769572   38829 ubuntu.go:190] setting up certificates
	I1213 18:35:54.769581   38829 provision.go:84] configureAuth start
	I1213 18:35:54.769640   38829 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-752103
	I1213 18:35:54.787462   38829 provision.go:143] copyHostCerts
	I1213 18:35:54.787509   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem
	I1213 18:35:54.787551   38829 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem, removing ...
	I1213 18:35:54.787563   38829 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem
	I1213 18:35:54.787650   38829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem (1675 bytes)
	I1213 18:35:54.787740   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem
	I1213 18:35:54.787760   38829 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem, removing ...
	I1213 18:35:54.787765   38829 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem
	I1213 18:35:54.787800   38829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem (1082 bytes)
	I1213 18:35:54.787845   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem
	I1213 18:35:54.787868   38829 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem, removing ...
	I1213 18:35:54.787877   38829 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem
	I1213 18:35:54.787902   38829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem (1123 bytes)
	I1213 18:35:54.787955   38829 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem org=jenkins.functional-752103 san=[127.0.0.1 192.168.49.2 functional-752103 localhost minikube]
	I1213 18:35:54.878725   38829 provision.go:177] copyRemoteCerts
	I1213 18:35:54.878794   38829 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 18:35:54.878839   38829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:35:54.895961   38829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa Username:docker}
	I1213 18:35:55.009601   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1213 18:35:55.009696   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 18:35:55.033852   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1213 18:35:55.033923   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 18:35:55.052749   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1213 18:35:55.052813   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 18:35:55.072069   38829 provision.go:87] duration metric: took 302.464055ms to configureAuth
	I1213 18:35:55.072107   38829 ubuntu.go:206] setting minikube options for container-runtime
	I1213 18:35:55.072313   38829 config.go:182] Loaded profile config "functional-752103": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 18:35:55.072426   38829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:35:55.092406   38829 main.go:143] libmachine: Using SSH client type: native
	I1213 18:35:55.092745   38829 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1213 18:35:55.092771   38829 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 18:35:55.413226   38829 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 18:35:55.413251   38829 machine.go:97] duration metric: took 1.16501875s to provisionDockerMachine
	I1213 18:35:55.413264   38829 start.go:293] postStartSetup for "functional-752103" (driver="docker")
	I1213 18:35:55.413300   38829 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 18:35:55.413403   38829 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 18:35:55.413470   38829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:35:55.430709   38829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa Username:docker}
	I1213 18:35:55.537093   38829 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 18:35:55.540324   38829 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1213 18:35:55.540345   38829 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1213 18:35:55.540349   38829 command_runner.go:130] > VERSION_ID="12"
	I1213 18:35:55.540354   38829 command_runner.go:130] > VERSION="12 (bookworm)"
	I1213 18:35:55.540359   38829 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1213 18:35:55.540363   38829 command_runner.go:130] > ID=debian
	I1213 18:35:55.540368   38829 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1213 18:35:55.540373   38829 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1213 18:35:55.540379   38829 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1213 18:35:55.540743   38829 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 18:35:55.540767   38829 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 18:35:55.540779   38829 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-2686/.minikube/addons for local assets ...
	I1213 18:35:55.540839   38829 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-2686/.minikube/files for local assets ...
	I1213 18:35:55.540926   38829 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem -> 46372.pem in /etc/ssl/certs
	I1213 18:35:55.540938   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem -> /etc/ssl/certs/46372.pem
	I1213 18:35:55.541035   38829 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/test/nested/copy/4637/hosts -> hosts in /etc/test/nested/copy/4637
	I1213 18:35:55.541044   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/test/nested/copy/4637/hosts -> /etc/test/nested/copy/4637/hosts
	I1213 18:35:55.541087   38829 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4637
	I1213 18:35:55.548955   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem --> /etc/ssl/certs/46372.pem (1708 bytes)
	I1213 18:35:55.566460   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/test/nested/copy/4637/hosts --> /etc/test/nested/copy/4637/hosts (40 bytes)
	I1213 18:35:55.584163   38829 start.go:296] duration metric: took 170.869499ms for postStartSetup
	I1213 18:35:55.584240   38829 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 18:35:55.584294   38829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:35:55.601966   38829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa Username:docker}
	I1213 18:35:55.706486   38829 command_runner.go:130] > 11%
	I1213 18:35:55.706569   38829 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 18:35:55.711597   38829 command_runner.go:130] > 174G
	I1213 18:35:55.711643   38829 fix.go:56] duration metric: took 1.484775946s for fixHost
	I1213 18:35:55.711654   38829 start.go:83] releasing machines lock for "functional-752103", held for 1.484809349s
	I1213 18:35:55.711733   38829 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-752103
	I1213 18:35:55.731505   38829 ssh_runner.go:195] Run: cat /version.json
	I1213 18:35:55.731524   38829 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 18:35:55.731557   38829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:35:55.731578   38829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:35:55.752781   38829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa Username:docker}
	I1213 18:35:55.757282   38829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa Username:docker}
	I1213 18:35:55.945606   38829 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1213 18:35:55.945674   38829 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "9f3959633d311997d75aab86f8ff840f224c6486"}
	I1213 18:35:55.945816   38829 ssh_runner.go:195] Run: systemctl --version
	I1213 18:35:55.951961   38829 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1213 18:35:55.951999   38829 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1213 18:35:55.952322   38829 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 18:35:55.992229   38829 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1213 18:35:56.001527   38829 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1213 18:35:56.001762   38829 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 18:35:56.001849   38829 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 18:35:56.014010   38829 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 18:35:56.014037   38829 start.go:496] detecting cgroup driver to use...
	I1213 18:35:56.014094   38829 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 18:35:56.014182   38829 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 18:35:56.030879   38829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 18:35:56.046797   38829 docker.go:218] disabling cri-docker service (if available) ...
	I1213 18:35:56.046882   38829 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 18:35:56.067384   38829 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 18:35:56.080815   38829 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 18:35:56.192099   38829 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 18:35:56.317541   38829 docker.go:234] disabling docker service ...
	I1213 18:35:56.317693   38829 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 18:35:56.332696   38829 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 18:35:56.345912   38829 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 18:35:56.463560   38829 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 18:35:56.579100   38829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 18:35:56.592582   38829 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 18:35:56.605285   38829 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1213 18:35:56.606432   38829 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 18:35:56.606495   38829 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:35:56.615251   38829 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 18:35:56.615329   38829 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:35:56.624699   38829 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:35:56.633587   38829 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:35:56.642744   38829 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 18:35:56.651128   38829 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:35:56.660108   38829 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:35:56.669661   38829 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:35:56.678839   38829 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 18:35:56.685773   38829 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1213 18:35:56.686744   38829 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 18:35:56.694432   38829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 18:35:56.830483   38829 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 18:35:57.005048   38829 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 18:35:57.005450   38829 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 18:35:57.010285   38829 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1213 18:35:57.010309   38829 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1213 18:35:57.010316   38829 command_runner.go:130] > Device: 0,72	Inode: 1640        Links: 1
	I1213 18:35:57.010333   38829 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 18:35:57.010338   38829 command_runner.go:130] > Access: 2025-12-13 18:35:56.944672058 +0000
	I1213 18:35:57.010348   38829 command_runner.go:130] > Modify: 2025-12-13 18:35:56.944672058 +0000
	I1213 18:35:57.010355   38829 command_runner.go:130] > Change: 2025-12-13 18:35:56.944672058 +0000
	I1213 18:35:57.010364   38829 command_runner.go:130] >  Birth: -
	I1213 18:35:57.010406   38829 start.go:564] Will wait 60s for crictl version
	I1213 18:35:57.010459   38829 ssh_runner.go:195] Run: which crictl
	I1213 18:35:57.014231   38829 command_runner.go:130] > /usr/local/bin/crictl
	I1213 18:35:57.014339   38829 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 18:35:57.039763   38829 command_runner.go:130] > Version:  0.1.0
	I1213 18:35:57.039785   38829 command_runner.go:130] > RuntimeName:  cri-o
	I1213 18:35:57.039789   38829 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1213 18:35:57.039795   38829 command_runner.go:130] > RuntimeApiVersion:  v1
	I1213 18:35:57.039807   38829 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 18:35:57.039886   38829 ssh_runner.go:195] Run: crio --version
	I1213 18:35:57.067200   38829 command_runner.go:130] > crio version 1.34.3
	I1213 18:35:57.067289   38829 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1213 18:35:57.067311   38829 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1213 18:35:57.067352   38829 command_runner.go:130] >    GitTreeState:   dirty
	I1213 18:35:57.067376   38829 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1213 18:35:57.067397   38829 command_runner.go:130] >    GoVersion:      go1.24.6
	I1213 18:35:57.067430   38829 command_runner.go:130] >    Compiler:       gc
	I1213 18:35:57.067455   38829 command_runner.go:130] >    Platform:       linux/arm64
	I1213 18:35:57.067476   38829 command_runner.go:130] >    Linkmode:       static
	I1213 18:35:57.067513   38829 command_runner.go:130] >    BuildTags:
	I1213 18:35:57.067537   38829 command_runner.go:130] >      static
	I1213 18:35:57.067557   38829 command_runner.go:130] >      netgo
	I1213 18:35:57.067592   38829 command_runner.go:130] >      osusergo
	I1213 18:35:57.067614   38829 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1213 18:35:57.067632   38829 command_runner.go:130] >      seccomp
	I1213 18:35:57.067651   38829 command_runner.go:130] >      apparmor
	I1213 18:35:57.067685   38829 command_runner.go:130] >      selinux
	I1213 18:35:57.067706   38829 command_runner.go:130] >    LDFlags:          unknown
	I1213 18:35:57.067726   38829 command_runner.go:130] >    SeccompEnabled:   true
	I1213 18:35:57.067760   38829 command_runner.go:130] >    AppArmorEnabled:  false
	I1213 18:35:57.069374   38829 ssh_runner.go:195] Run: crio --version
	I1213 18:35:57.097856   38829 command_runner.go:130] > crio version 1.34.3
	I1213 18:35:57.097937   38829 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1213 18:35:57.097971   38829 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1213 18:35:57.098005   38829 command_runner.go:130] >    GitTreeState:   dirty
	I1213 18:35:57.098025   38829 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1213 18:35:57.098058   38829 command_runner.go:130] >    GoVersion:      go1.24.6
	I1213 18:35:57.098082   38829 command_runner.go:130] >    Compiler:       gc
	I1213 18:35:57.098103   38829 command_runner.go:130] >    Platform:       linux/arm64
	I1213 18:35:57.098156   38829 command_runner.go:130] >    Linkmode:       static
	I1213 18:35:57.098180   38829 command_runner.go:130] >    BuildTags:
	I1213 18:35:57.098200   38829 command_runner.go:130] >      static
	I1213 18:35:57.098234   38829 command_runner.go:130] >      netgo
	I1213 18:35:57.098253   38829 command_runner.go:130] >      osusergo
	I1213 18:35:57.098277   38829 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1213 18:35:57.098306   38829 command_runner.go:130] >      seccomp
	I1213 18:35:57.098328   38829 command_runner.go:130] >      apparmor
	I1213 18:35:57.098348   38829 command_runner.go:130] >      selinux
	I1213 18:35:57.098384   38829 command_runner.go:130] >    LDFlags:          unknown
	I1213 18:35:57.098407   38829 command_runner.go:130] >    SeccompEnabled:   true
	I1213 18:35:57.098425   38829 command_runner.go:130] >    AppArmorEnabled:  false
	I1213 18:35:57.103998   38829 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1213 18:35:57.106795   38829 cli_runner.go:164] Run: docker network inspect functional-752103 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 18:35:57.122531   38829 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 18:35:57.126557   38829 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1213 18:35:57.126659   38829 kubeadm.go:884] updating cluster {Name:functional-752103 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-752103 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQem
uFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 18:35:57.126789   38829 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 18:35:57.126855   38829 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 18:35:57.159258   38829 command_runner.go:130] > {
	I1213 18:35:57.159281   38829 command_runner.go:130] >   "images":  [
	I1213 18:35:57.159286   38829 command_runner.go:130] >     {
	I1213 18:35:57.159295   38829 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1213 18:35:57.159299   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.159305   38829 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1213 18:35:57.159309   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159312   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.159321   38829 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1213 18:35:57.159333   38829 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1213 18:35:57.159349   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159354   38829 command_runner.go:130] >       "size":  "111333938",
	I1213 18:35:57.159358   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.159370   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.159373   38829 command_runner.go:130] >     },
	I1213 18:35:57.159376   38829 command_runner.go:130] >     {
	I1213 18:35:57.159382   38829 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1213 18:35:57.159389   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.159394   38829 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1213 18:35:57.159398   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159402   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.159410   38829 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1213 18:35:57.159421   38829 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1213 18:35:57.159425   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159429   38829 command_runner.go:130] >       "size":  "29037500",
	I1213 18:35:57.159435   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.159443   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.159450   38829 command_runner.go:130] >     },
	I1213 18:35:57.159453   38829 command_runner.go:130] >     {
	I1213 18:35:57.159459   38829 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1213 18:35:57.159466   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.159471   38829 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1213 18:35:57.159474   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159481   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.159489   38829 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1213 18:35:57.159500   38829 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1213 18:35:57.159504   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159508   38829 command_runner.go:130] >       "size":  "74491780",
	I1213 18:35:57.159514   38829 command_runner.go:130] >       "username":  "nonroot",
	I1213 18:35:57.159519   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.159526   38829 command_runner.go:130] >     },
	I1213 18:35:57.159529   38829 command_runner.go:130] >     {
	I1213 18:35:57.159536   38829 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1213 18:35:57.159548   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.159554   38829 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1213 18:35:57.159560   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159564   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.159572   38829 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1213 18:35:57.159582   38829 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1213 18:35:57.159586   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159596   38829 command_runner.go:130] >       "size":  "60857170",
	I1213 18:35:57.159600   38829 command_runner.go:130] >       "uid":  {
	I1213 18:35:57.159604   38829 command_runner.go:130] >         "value":  "0"
	I1213 18:35:57.159607   38829 command_runner.go:130] >       },
	I1213 18:35:57.159618   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.159626   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.159629   38829 command_runner.go:130] >     },
	I1213 18:35:57.159633   38829 command_runner.go:130] >     {
	I1213 18:35:57.159646   38829 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1213 18:35:57.159650   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.159655   38829 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1213 18:35:57.159661   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159665   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.159673   38829 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1213 18:35:57.159684   38829 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1213 18:35:57.159687   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159691   38829 command_runner.go:130] >       "size":  "84949999",
	I1213 18:35:57.159697   38829 command_runner.go:130] >       "uid":  {
	I1213 18:35:57.159701   38829 command_runner.go:130] >         "value":  "0"
	I1213 18:35:57.159706   38829 command_runner.go:130] >       },
	I1213 18:35:57.159710   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.159720   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.159723   38829 command_runner.go:130] >     },
	I1213 18:35:57.159726   38829 command_runner.go:130] >     {
	I1213 18:35:57.159733   38829 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1213 18:35:57.159740   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.159750   38829 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1213 18:35:57.159756   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159762   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.159771   38829 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1213 18:35:57.159782   38829 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1213 18:35:57.159786   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159790   38829 command_runner.go:130] >       "size":  "72170325",
	I1213 18:35:57.159794   38829 command_runner.go:130] >       "uid":  {
	I1213 18:35:57.159800   38829 command_runner.go:130] >         "value":  "0"
	I1213 18:35:57.159804   38829 command_runner.go:130] >       },
	I1213 18:35:57.159810   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.159814   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.159820   38829 command_runner.go:130] >     },
	I1213 18:35:57.159823   38829 command_runner.go:130] >     {
	I1213 18:35:57.159829   38829 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1213 18:35:57.159836   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.159841   38829 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1213 18:35:57.159847   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159851   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.159859   38829 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1213 18:35:57.159870   38829 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1213 18:35:57.159874   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159878   38829 command_runner.go:130] >       "size":  "74106775",
	I1213 18:35:57.159882   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.159888   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.159892   38829 command_runner.go:130] >     },
	I1213 18:35:57.159897   38829 command_runner.go:130] >     {
	I1213 18:35:57.159904   38829 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1213 18:35:57.159910   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.159916   38829 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1213 18:35:57.159926   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159934   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.159942   38829 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1213 18:35:57.159966   38829 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1213 18:35:57.159973   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159977   38829 command_runner.go:130] >       "size":  "49822549",
	I1213 18:35:57.159981   38829 command_runner.go:130] >       "uid":  {
	I1213 18:35:57.159985   38829 command_runner.go:130] >         "value":  "0"
	I1213 18:35:57.159991   38829 command_runner.go:130] >       },
	I1213 18:35:57.159995   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.160003   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.160008   38829 command_runner.go:130] >     },
	I1213 18:35:57.160011   38829 command_runner.go:130] >     {
	I1213 18:35:57.160017   38829 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1213 18:35:57.160025   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.160030   38829 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1213 18:35:57.160033   38829 command_runner.go:130] >       ],
	I1213 18:35:57.160040   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.160048   38829 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1213 18:35:57.160059   38829 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1213 18:35:57.160063   38829 command_runner.go:130] >       ],
	I1213 18:35:57.160067   38829 command_runner.go:130] >       "size":  "519884",
	I1213 18:35:57.160070   38829 command_runner.go:130] >       "uid":  {
	I1213 18:35:57.160077   38829 command_runner.go:130] >         "value":  "65535"
	I1213 18:35:57.160080   38829 command_runner.go:130] >       },
	I1213 18:35:57.160084   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.160093   38829 command_runner.go:130] >       "pinned":  true
	I1213 18:35:57.160096   38829 command_runner.go:130] >     }
	I1213 18:35:57.160101   38829 command_runner.go:130] >   ]
	I1213 18:35:57.160112   38829 command_runner.go:130] > }
	I1213 18:35:57.162388   38829 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 18:35:57.162414   38829 crio.go:433] Images already preloaded, skipping extraction
	I1213 18:35:57.162470   38829 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 18:35:57.186777   38829 command_runner.go:130] > {
	I1213 18:35:57.186796   38829 command_runner.go:130] >   "images":  [
	I1213 18:35:57.186801   38829 command_runner.go:130] >     {
	I1213 18:35:57.186817   38829 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1213 18:35:57.186822   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.186828   38829 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1213 18:35:57.186832   38829 command_runner.go:130] >       ],
	I1213 18:35:57.186836   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.186846   38829 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1213 18:35:57.186854   38829 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1213 18:35:57.186857   38829 command_runner.go:130] >       ],
	I1213 18:35:57.186861   38829 command_runner.go:130] >       "size":  "111333938",
	I1213 18:35:57.186865   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.186873   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.186877   38829 command_runner.go:130] >     },
	I1213 18:35:57.186880   38829 command_runner.go:130] >     {
	I1213 18:35:57.186886   38829 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1213 18:35:57.186890   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.186895   38829 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1213 18:35:57.186898   38829 command_runner.go:130] >       ],
	I1213 18:35:57.186902   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.186913   38829 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1213 18:35:57.186921   38829 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1213 18:35:57.186928   38829 command_runner.go:130] >       ],
	I1213 18:35:57.186933   38829 command_runner.go:130] >       "size":  "29037500",
	I1213 18:35:57.186936   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.186942   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.186945   38829 command_runner.go:130] >     },
	I1213 18:35:57.186948   38829 command_runner.go:130] >     {
	I1213 18:35:57.186954   38829 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1213 18:35:57.186958   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.186963   38829 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1213 18:35:57.186966   38829 command_runner.go:130] >       ],
	I1213 18:35:57.186970   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.186977   38829 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1213 18:35:57.186985   38829 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1213 18:35:57.186992   38829 command_runner.go:130] >       ],
	I1213 18:35:57.186996   38829 command_runner.go:130] >       "size":  "74491780",
	I1213 18:35:57.187000   38829 command_runner.go:130] >       "username":  "nonroot",
	I1213 18:35:57.187004   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.187007   38829 command_runner.go:130] >     },
	I1213 18:35:57.187009   38829 command_runner.go:130] >     {
	I1213 18:35:57.187016   38829 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1213 18:35:57.187020   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.187024   38829 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1213 18:35:57.187029   38829 command_runner.go:130] >       ],
	I1213 18:35:57.187033   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.187041   38829 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1213 18:35:57.187050   38829 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1213 18:35:57.187053   38829 command_runner.go:130] >       ],
	I1213 18:35:57.187057   38829 command_runner.go:130] >       "size":  "60857170",
	I1213 18:35:57.187061   38829 command_runner.go:130] >       "uid":  {
	I1213 18:35:57.187064   38829 command_runner.go:130] >         "value":  "0"
	I1213 18:35:57.187067   38829 command_runner.go:130] >       },
	I1213 18:35:57.187075   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.187079   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.187082   38829 command_runner.go:130] >     },
	I1213 18:35:57.187085   38829 command_runner.go:130] >     {
	I1213 18:35:57.187092   38829 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1213 18:35:57.187095   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.187101   38829 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1213 18:35:57.187104   38829 command_runner.go:130] >       ],
	I1213 18:35:57.187108   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.187115   38829 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1213 18:35:57.187123   38829 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1213 18:35:57.187126   38829 command_runner.go:130] >       ],
	I1213 18:35:57.187130   38829 command_runner.go:130] >       "size":  "84949999",
	I1213 18:35:57.187134   38829 command_runner.go:130] >       "uid":  {
	I1213 18:35:57.187137   38829 command_runner.go:130] >         "value":  "0"
	I1213 18:35:57.187146   38829 command_runner.go:130] >       },
	I1213 18:35:57.187149   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.187153   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.187157   38829 command_runner.go:130] >     },
	I1213 18:35:57.187159   38829 command_runner.go:130] >     {
	I1213 18:35:57.187166   38829 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1213 18:35:57.187170   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.187175   38829 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1213 18:35:57.187178   38829 command_runner.go:130] >       ],
	I1213 18:35:57.187182   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.187190   38829 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1213 18:35:57.187199   38829 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1213 18:35:57.187202   38829 command_runner.go:130] >       ],
	I1213 18:35:57.187206   38829 command_runner.go:130] >       "size":  "72170325",
	I1213 18:35:57.187209   38829 command_runner.go:130] >       "uid":  {
	I1213 18:35:57.187213   38829 command_runner.go:130] >         "value":  "0"
	I1213 18:35:57.187216   38829 command_runner.go:130] >       },
	I1213 18:35:57.187219   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.187223   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.187226   38829 command_runner.go:130] >     },
	I1213 18:35:57.187229   38829 command_runner.go:130] >     {
	I1213 18:35:57.187236   38829 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1213 18:35:57.187239   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.187244   38829 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1213 18:35:57.187247   38829 command_runner.go:130] >       ],
	I1213 18:35:57.187251   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.187258   38829 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1213 18:35:57.187266   38829 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1213 18:35:57.187269   38829 command_runner.go:130] >       ],
	I1213 18:35:57.187273   38829 command_runner.go:130] >       "size":  "74106775",
	I1213 18:35:57.187277   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.187280   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.187283   38829 command_runner.go:130] >     },
	I1213 18:35:57.187291   38829 command_runner.go:130] >     {
	I1213 18:35:57.187297   38829 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1213 18:35:57.187300   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.187306   38829 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1213 18:35:57.187309   38829 command_runner.go:130] >       ],
	I1213 18:35:57.187313   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.187321   38829 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1213 18:35:57.187337   38829 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1213 18:35:57.187340   38829 command_runner.go:130] >       ],
	I1213 18:35:57.187344   38829 command_runner.go:130] >       "size":  "49822549",
	I1213 18:35:57.187348   38829 command_runner.go:130] >       "uid":  {
	I1213 18:35:57.187352   38829 command_runner.go:130] >         "value":  "0"
	I1213 18:35:57.187355   38829 command_runner.go:130] >       },
	I1213 18:35:57.187358   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.187362   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.187364   38829 command_runner.go:130] >     },
	I1213 18:35:57.187367   38829 command_runner.go:130] >     {
	I1213 18:35:57.187374   38829 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1213 18:35:57.187378   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.187382   38829 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1213 18:35:57.187385   38829 command_runner.go:130] >       ],
	I1213 18:35:57.187389   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.187396   38829 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1213 18:35:57.187404   38829 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1213 18:35:57.187407   38829 command_runner.go:130] >       ],
	I1213 18:35:57.187410   38829 command_runner.go:130] >       "size":  "519884",
	I1213 18:35:57.187414   38829 command_runner.go:130] >       "uid":  {
	I1213 18:35:57.187417   38829 command_runner.go:130] >         "value":  "65535"
	I1213 18:35:57.187420   38829 command_runner.go:130] >       },
	I1213 18:35:57.187424   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.187428   38829 command_runner.go:130] >       "pinned":  true
	I1213 18:35:57.187431   38829 command_runner.go:130] >     }
	I1213 18:35:57.187434   38829 command_runner.go:130] >   ]
	I1213 18:35:57.187440   38829 command_runner.go:130] > }
	I1213 18:35:57.187570   38829 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 18:35:57.187578   38829 cache_images.go:86] Images are preloaded, skipping loading
	I1213 18:35:57.187585   38829 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1213 18:35:57.187672   38829 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-752103 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-752103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 18:35:57.187756   38829 ssh_runner.go:195] Run: crio config
	I1213 18:35:57.235276   38829 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1213 18:35:57.235304   38829 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1213 18:35:57.235312   38829 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1213 18:35:57.235316   38829 command_runner.go:130] > #
	I1213 18:35:57.235323   38829 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1213 18:35:57.235330   38829 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1213 18:35:57.235336   38829 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1213 18:35:57.235344   38829 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1213 18:35:57.235351   38829 command_runner.go:130] > # reload'.
	I1213 18:35:57.235358   38829 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1213 18:35:57.235367   38829 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1213 18:35:57.235374   38829 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1213 18:35:57.235386   38829 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1213 18:35:57.235390   38829 command_runner.go:130] > [crio]
	I1213 18:35:57.235397   38829 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1213 18:35:57.235406   38829 command_runner.go:130] > # containers images, in this directory.
	I1213 18:35:57.235421   38829 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1213 18:35:57.235432   38829 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1213 18:35:57.235437   38829 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1213 18:35:57.235445   38829 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1213 18:35:57.235452   38829 command_runner.go:130] > # imagestore = ""
	I1213 18:35:57.235458   38829 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1213 18:35:57.235468   38829 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1213 18:35:57.235475   38829 command_runner.go:130] > # storage_driver = "overlay"
	I1213 18:35:57.235481   38829 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1213 18:35:57.235491   38829 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1213 18:35:57.235495   38829 command_runner.go:130] > # storage_option = [
	I1213 18:35:57.235502   38829 command_runner.go:130] > # ]
	I1213 18:35:57.235511   38829 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1213 18:35:57.235518   38829 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1213 18:35:57.235533   38829 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1213 18:35:57.235539   38829 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1213 18:35:57.235547   38829 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1213 18:35:57.235554   38829 command_runner.go:130] > # always happen on a node reboot
	I1213 18:35:57.235660   38829 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1213 18:35:57.235692   38829 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1213 18:35:57.235700   38829 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1213 18:35:57.235705   38829 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1213 18:35:57.235710   38829 command_runner.go:130] > # version_file_persist = ""
	I1213 18:35:57.235718   38829 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1213 18:35:57.235727   38829 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1213 18:35:57.235730   38829 command_runner.go:130] > # internal_wipe = true
	I1213 18:35:57.235739   38829 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1213 18:35:57.235744   38829 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1213 18:35:57.235748   38829 command_runner.go:130] > # internal_repair = true
	I1213 18:35:57.235754   38829 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1213 18:35:57.235760   38829 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1213 18:35:57.235769   38829 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1213 18:35:57.235775   38829 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1213 18:35:57.235781   38829 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1213 18:35:57.235784   38829 command_runner.go:130] > [crio.api]
	I1213 18:35:57.235790   38829 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1213 18:35:57.235795   38829 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1213 18:35:57.235800   38829 command_runner.go:130] > # IP address on which the stream server will listen.
	I1213 18:35:57.235804   38829 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1213 18:35:57.235811   38829 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1213 18:35:57.235816   38829 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1213 18:35:57.235819   38829 command_runner.go:130] > # stream_port = "0"
	I1213 18:35:57.235824   38829 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1213 18:35:57.235828   38829 command_runner.go:130] > # stream_enable_tls = false
	I1213 18:35:57.235838   38829 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1213 18:35:57.235842   38829 command_runner.go:130] > # stream_idle_timeout = ""
	I1213 18:35:57.235849   38829 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1213 18:35:57.235854   38829 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1213 18:35:57.235858   38829 command_runner.go:130] > # stream_tls_cert = ""
	I1213 18:35:57.235864   38829 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1213 18:35:57.235869   38829 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1213 18:35:57.235873   38829 command_runner.go:130] > # stream_tls_key = ""
	I1213 18:35:57.235880   38829 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1213 18:35:57.235886   38829 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1213 18:35:57.235892   38829 command_runner.go:130] > # automatically pick up the changes.
	I1213 18:35:57.235896   38829 command_runner.go:130] > # stream_tls_ca = ""
	I1213 18:35:57.235914   38829 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1213 18:35:57.235918   38829 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1213 18:35:57.235926   38829 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1213 18:35:57.235930   38829 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1213 18:35:57.235936   38829 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1213 18:35:57.235942   38829 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1213 18:35:57.235945   38829 command_runner.go:130] > [crio.runtime]
	I1213 18:35:57.235951   38829 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1213 18:35:57.235956   38829 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1213 18:35:57.235960   38829 command_runner.go:130] > # "nofile=1024:2048"
	I1213 18:35:57.235965   38829 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1213 18:35:57.235969   38829 command_runner.go:130] > # default_ulimits = [
	I1213 18:35:57.235972   38829 command_runner.go:130] > # ]
	I1213 18:35:57.235978   38829 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1213 18:35:57.236231   38829 command_runner.go:130] > # no_pivot = false
	I1213 18:35:57.236246   38829 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1213 18:35:57.236252   38829 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1213 18:35:57.236258   38829 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1213 18:35:57.236264   38829 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1213 18:35:57.236272   38829 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1213 18:35:57.236280   38829 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1213 18:35:57.236292   38829 command_runner.go:130] > # conmon = ""
	I1213 18:35:57.236297   38829 command_runner.go:130] > # Cgroup setting for conmon
	I1213 18:35:57.236304   38829 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1213 18:35:57.236308   38829 command_runner.go:130] > conmon_cgroup = "pod"
	I1213 18:35:57.236314   38829 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1213 18:35:57.236320   38829 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1213 18:35:57.236335   38829 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1213 18:35:57.236339   38829 command_runner.go:130] > # conmon_env = [
	I1213 18:35:57.236342   38829 command_runner.go:130] > # ]
	I1213 18:35:57.236348   38829 command_runner.go:130] > # Additional environment variables to set for all the
	I1213 18:35:57.236353   38829 command_runner.go:130] > # containers. These are overridden if set in the
	I1213 18:35:57.236358   38829 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1213 18:35:57.236362   38829 command_runner.go:130] > # default_env = [
	I1213 18:35:57.236365   38829 command_runner.go:130] > # ]
	I1213 18:35:57.236370   38829 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1213 18:35:57.236378   38829 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1213 18:35:57.236386   38829 command_runner.go:130] > # selinux = false
	I1213 18:35:57.236397   38829 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1213 18:35:57.236405   38829 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1213 18:35:57.236415   38829 command_runner.go:130] > # This option supports live configuration reload.
	I1213 18:35:57.236419   38829 command_runner.go:130] > # seccomp_profile = ""
	I1213 18:35:57.236425   38829 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1213 18:35:57.236436   38829 command_runner.go:130] > # This option supports live configuration reload.
	I1213 18:35:57.236440   38829 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1213 18:35:57.236447   38829 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1213 18:35:57.236457   38829 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1213 18:35:57.236464   38829 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1213 18:35:57.236470   38829 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1213 18:35:57.236477   38829 command_runner.go:130] > # This option supports live configuration reload.
	I1213 18:35:57.236482   38829 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1213 18:35:57.236493   38829 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1213 18:35:57.236497   38829 command_runner.go:130] > # the cgroup blockio controller.
	I1213 18:35:57.236501   38829 command_runner.go:130] > # blockio_config_file = ""
	I1213 18:35:57.236512   38829 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1213 18:35:57.236519   38829 command_runner.go:130] > # blockio parameters.
	I1213 18:35:57.236524   38829 command_runner.go:130] > # blockio_reload = false
	I1213 18:35:57.236530   38829 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1213 18:35:57.236538   38829 command_runner.go:130] > # irqbalance daemon.
	I1213 18:35:57.236543   38829 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1213 18:35:57.236550   38829 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1213 18:35:57.236560   38829 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1213 18:35:57.236567   38829 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1213 18:35:57.236573   38829 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1213 18:35:57.236579   38829 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1213 18:35:57.236584   38829 command_runner.go:130] > # This option supports live configuration reload.
	I1213 18:35:57.236589   38829 command_runner.go:130] > # rdt_config_file = ""
	I1213 18:35:57.236594   38829 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1213 18:35:57.236600   38829 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1213 18:35:57.236606   38829 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1213 18:35:57.236612   38829 command_runner.go:130] > # separate_pull_cgroup = ""
	I1213 18:35:57.236619   38829 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1213 18:35:57.236626   38829 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1213 18:35:57.236633   38829 command_runner.go:130] > # will be added.
	I1213 18:35:57.236637   38829 command_runner.go:130] > # default_capabilities = [
	I1213 18:35:57.236640   38829 command_runner.go:130] > # 	"CHOWN",
	I1213 18:35:57.236644   38829 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1213 18:35:57.236647   38829 command_runner.go:130] > # 	"FSETID",
	I1213 18:35:57.236650   38829 command_runner.go:130] > # 	"FOWNER",
	I1213 18:35:57.236653   38829 command_runner.go:130] > # 	"SETGID",
	I1213 18:35:57.236656   38829 command_runner.go:130] > # 	"SETUID",
	I1213 18:35:57.236674   38829 command_runner.go:130] > # 	"SETPCAP",
	I1213 18:35:57.236679   38829 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1213 18:35:57.236682   38829 command_runner.go:130] > # 	"KILL",
	I1213 18:35:57.236685   38829 command_runner.go:130] > # ]
	I1213 18:35:57.236693   38829 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1213 18:35:57.236702   38829 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1213 18:35:57.236710   38829 command_runner.go:130] > # add_inheritable_capabilities = false
	I1213 18:35:57.236716   38829 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1213 18:35:57.236722   38829 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1213 18:35:57.236726   38829 command_runner.go:130] > default_sysctls = [
	I1213 18:35:57.236731   38829 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1213 18:35:57.236734   38829 command_runner.go:130] > ]
	I1213 18:35:57.236738   38829 command_runner.go:130] > # List of devices on the host that a
	I1213 18:35:57.236748   38829 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1213 18:35:57.236755   38829 command_runner.go:130] > # allowed_devices = [
	I1213 18:35:57.236758   38829 command_runner.go:130] > # 	"/dev/fuse",
	I1213 18:35:57.236762   38829 command_runner.go:130] > # 	"/dev/net/tun",
	I1213 18:35:57.236772   38829 command_runner.go:130] > # ]
	I1213 18:35:57.236777   38829 command_runner.go:130] > # List of additional devices. specified as
	I1213 18:35:57.236784   38829 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1213 18:35:57.236794   38829 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1213 18:35:57.236800   38829 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1213 18:35:57.236804   38829 command_runner.go:130] > # additional_devices = [
	I1213 18:35:57.236832   38829 command_runner.go:130] > # ]
	I1213 18:35:57.236837   38829 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1213 18:35:57.236841   38829 command_runner.go:130] > # cdi_spec_dirs = [
	I1213 18:35:57.236844   38829 command_runner.go:130] > # 	"/etc/cdi",
	I1213 18:35:57.236848   38829 command_runner.go:130] > # 	"/var/run/cdi",
	I1213 18:35:57.236854   38829 command_runner.go:130] > # ]
	I1213 18:35:57.236861   38829 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1213 18:35:57.236870   38829 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1213 18:35:57.236874   38829 command_runner.go:130] > # Defaults to false.
	I1213 18:35:57.236880   38829 command_runner.go:130] > # device_ownership_from_security_context = false
	I1213 18:35:57.236891   38829 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1213 18:35:57.236898   38829 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1213 18:35:57.236901   38829 command_runner.go:130] > # hooks_dir = [
	I1213 18:35:57.236908   38829 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1213 18:35:57.236915   38829 command_runner.go:130] > # ]
	I1213 18:35:57.236921   38829 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1213 18:35:57.236931   38829 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1213 18:35:57.236939   38829 command_runner.go:130] > # its default mounts from the following two files:
	I1213 18:35:57.236942   38829 command_runner.go:130] > #
	I1213 18:35:57.236949   38829 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1213 18:35:57.236959   38829 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1213 18:35:57.236964   38829 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1213 18:35:57.236967   38829 command_runner.go:130] > #
	I1213 18:35:57.236974   38829 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1213 18:35:57.236984   38829 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1213 18:35:57.236990   38829 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1213 18:35:57.236996   38829 command_runner.go:130] > #      only add mounts it finds in this file.
	I1213 18:35:57.237024   38829 command_runner.go:130] > #
	I1213 18:35:57.237029   38829 command_runner.go:130] > # default_mounts_file = ""
	I1213 18:35:57.237035   38829 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1213 18:35:57.237044   38829 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1213 18:35:57.237052   38829 command_runner.go:130] > # pids_limit = -1
	I1213 18:35:57.237058   38829 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1213 18:35:57.237065   38829 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1213 18:35:57.237075   38829 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1213 18:35:57.237084   38829 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1213 18:35:57.237092   38829 command_runner.go:130] > # log_size_max = -1
	I1213 18:35:57.237099   38829 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1213 18:35:57.237104   38829 command_runner.go:130] > # log_to_journald = false
	I1213 18:35:57.237114   38829 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1213 18:35:57.237119   38829 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1213 18:35:57.237125   38829 command_runner.go:130] > # Path to directory for container attach sockets.
	I1213 18:35:57.237130   38829 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1213 18:35:57.237137   38829 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1213 18:35:57.237145   38829 command_runner.go:130] > # bind_mount_prefix = ""
	I1213 18:35:57.237151   38829 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1213 18:35:57.237155   38829 command_runner.go:130] > # read_only = false
	I1213 18:35:57.237162   38829 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1213 18:35:57.237173   38829 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1213 18:35:57.237181   38829 command_runner.go:130] > # live configuration reload.
	I1213 18:35:57.237191   38829 command_runner.go:130] > # log_level = "info"
	I1213 18:35:57.237200   38829 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1213 18:35:57.237212   38829 command_runner.go:130] > # This option supports live configuration reload.
	I1213 18:35:57.237216   38829 command_runner.go:130] > # log_filter = ""
	I1213 18:35:57.237222   38829 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1213 18:35:57.237228   38829 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1213 18:35:57.237237   38829 command_runner.go:130] > # separated by comma.
	I1213 18:35:57.237245   38829 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 18:35:57.237249   38829 command_runner.go:130] > # uid_mappings = ""
	I1213 18:35:57.237255   38829 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1213 18:35:57.237265   38829 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1213 18:35:57.237269   38829 command_runner.go:130] > # separated by comma.
	I1213 18:35:57.237277   38829 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 18:35:57.237284   38829 command_runner.go:130] > # gid_mappings = ""
	I1213 18:35:57.237290   38829 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1213 18:35:57.237297   38829 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1213 18:35:57.237311   38829 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1213 18:35:57.237319   38829 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 18:35:57.237323   38829 command_runner.go:130] > # minimum_mappable_uid = -1
	I1213 18:35:57.237329   38829 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1213 18:35:57.237339   38829 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1213 18:35:57.237345   38829 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1213 18:35:57.237354   38829 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 18:35:57.237949   38829 command_runner.go:130] > # minimum_mappable_gid = -1
	I1213 18:35:57.237966   38829 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1213 18:35:57.237972   38829 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1213 18:35:57.237979   38829 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1213 18:35:57.238476   38829 command_runner.go:130] > # ctr_stop_timeout = 30
	I1213 18:35:57.238490   38829 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1213 18:35:57.238497   38829 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1213 18:35:57.238503   38829 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1213 18:35:57.238519   38829 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1213 18:35:57.238932   38829 command_runner.go:130] > # drop_infra_ctr = true
	I1213 18:35:57.238947   38829 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1213 18:35:57.238955   38829 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1213 18:35:57.238963   38829 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1213 18:35:57.239291   38829 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1213 18:35:57.239306   38829 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1213 18:35:57.239313   38829 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1213 18:35:57.239319   38829 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1213 18:35:57.239324   38829 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1213 18:35:57.239634   38829 command_runner.go:130] > # shared_cpuset = ""
	I1213 18:35:57.239648   38829 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1213 18:35:57.239654   38829 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1213 18:35:57.240060   38829 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1213 18:35:57.240075   38829 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1213 18:35:57.240414   38829 command_runner.go:130] > # pinns_path = ""
	I1213 18:35:57.240427   38829 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1213 18:35:57.240434   38829 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1213 18:35:57.240846   38829 command_runner.go:130] > # enable_criu_support = true
	I1213 18:35:57.240873   38829 command_runner.go:130] > # Enable/disable the generation of the container,
	I1213 18:35:57.240881   38829 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1213 18:35:57.241322   38829 command_runner.go:130] > # enable_pod_events = false
	I1213 18:35:57.241336   38829 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1213 18:35:57.241342   38829 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1213 18:35:57.241756   38829 command_runner.go:130] > # default_runtime = "crun"
	I1213 18:35:57.241768   38829 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1213 18:35:57.241777   38829 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1213 18:35:57.241786   38829 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1213 18:35:57.241791   38829 command_runner.go:130] > # creation as a file is not desired either.
	I1213 18:35:57.241800   38829 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1213 18:35:57.241820   38829 command_runner.go:130] > # the hostname is being managed dynamically.
	I1213 18:35:57.242010   38829 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1213 18:35:57.242355   38829 command_runner.go:130] > # ]
	I1213 18:35:57.242370   38829 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1213 18:35:57.242386   38829 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1213 18:35:57.242394   38829 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1213 18:35:57.242400   38829 command_runner.go:130] > # Each entry in the table should follow the format:
	I1213 18:35:57.242406   38829 command_runner.go:130] > #
	I1213 18:35:57.242412   38829 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1213 18:35:57.242419   38829 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1213 18:35:57.242423   38829 command_runner.go:130] > # runtime_type = "oci"
	I1213 18:35:57.242427   38829 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1213 18:35:57.242434   38829 command_runner.go:130] > # inherit_default_runtime = false
	I1213 18:35:57.242441   38829 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1213 18:35:57.242445   38829 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1213 18:35:57.242449   38829 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1213 18:35:57.242460   38829 command_runner.go:130] > # monitor_env = []
	I1213 18:35:57.242465   38829 command_runner.go:130] > # privileged_without_host_devices = false
	I1213 18:35:57.242470   38829 command_runner.go:130] > # allowed_annotations = []
	I1213 18:35:57.242487   38829 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1213 18:35:57.242491   38829 command_runner.go:130] > # no_sync_log = false
	I1213 18:35:57.242496   38829 command_runner.go:130] > # default_annotations = {}
	I1213 18:35:57.242500   38829 command_runner.go:130] > # stream_websockets = false
	I1213 18:35:57.242507   38829 command_runner.go:130] > # seccomp_profile = ""
	I1213 18:35:57.242553   38829 command_runner.go:130] > # Where:
	I1213 18:35:57.242564   38829 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1213 18:35:57.242570   38829 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1213 18:35:57.242577   38829 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1213 18:35:57.242583   38829 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1213 18:35:57.242587   38829 command_runner.go:130] > #   in $PATH.
	I1213 18:35:57.242593   38829 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1213 18:35:57.242598   38829 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1213 18:35:57.242614   38829 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1213 18:35:57.242620   38829 command_runner.go:130] > #   state.
	I1213 18:35:57.242626   38829 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1213 18:35:57.242633   38829 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1213 18:35:57.242641   38829 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1213 18:35:57.242647   38829 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1213 18:35:57.242652   38829 command_runner.go:130] > #   the values from the default runtime on load time.
	I1213 18:35:57.242659   38829 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1213 18:35:57.242665   38829 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1213 18:35:57.242671   38829 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1213 18:35:57.242684   38829 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1213 18:35:57.242694   38829 command_runner.go:130] > #   The currently recognized values are:
	I1213 18:35:57.242701   38829 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1213 18:35:57.242709   38829 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1213 18:35:57.242718   38829 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1213 18:35:57.242724   38829 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1213 18:35:57.242736   38829 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1213 18:35:57.242745   38829 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1213 18:35:57.242761   38829 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1213 18:35:57.242774   38829 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1213 18:35:57.242781   38829 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1213 18:35:57.242788   38829 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1213 18:35:57.242795   38829 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1213 18:35:57.242802   38829 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1213 18:35:57.242813   38829 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1213 18:35:57.242824   38829 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1213 18:35:57.242842   38829 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1213 18:35:57.242850   38829 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1213 18:35:57.242861   38829 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1213 18:35:57.242865   38829 command_runner.go:130] > #   deprecated option "conmon".
	I1213 18:35:57.242873   38829 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1213 18:35:57.242881   38829 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1213 18:35:57.242888   38829 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1213 18:35:57.242894   38829 command_runner.go:130] > #   should be moved to the container's cgroup
	I1213 18:35:57.242911   38829 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1213 18:35:57.242917   38829 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1213 18:35:57.242924   38829 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1213 18:35:57.242933   38829 command_runner.go:130] > #   conmon-rs by using:
	I1213 18:35:57.242941   38829 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1213 18:35:57.242954   38829 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1213 18:35:57.242962   38829 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1213 18:35:57.242973   38829 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1213 18:35:57.242978   38829 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1213 18:35:57.242995   38829 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1213 18:35:57.243003   38829 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1213 18:35:57.243008   38829 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1213 18:35:57.243017   38829 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1213 18:35:57.243027   38829 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1213 18:35:57.243033   38829 command_runner.go:130] > #   when a machine crash happens.
	I1213 18:35:57.243040   38829 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1213 18:35:57.243049   38829 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1213 18:35:57.243065   38829 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1213 18:35:57.243070   38829 command_runner.go:130] > #   seccomp profile for the runtime.
	I1213 18:35:57.243076   38829 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1213 18:35:57.243084   38829 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1213 18:35:57.243094   38829 command_runner.go:130] > #
	I1213 18:35:57.243099   38829 command_runner.go:130] > # Using the seccomp notifier feature:
	I1213 18:35:57.243102   38829 command_runner.go:130] > #
	I1213 18:35:57.243113   38829 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1213 18:35:57.243123   38829 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1213 18:35:57.243126   38829 command_runner.go:130] > #
	I1213 18:35:57.243139   38829 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1213 18:35:57.243153   38829 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1213 18:35:57.243157   38829 command_runner.go:130] > #
	I1213 18:35:57.243163   38829 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1213 18:35:57.243170   38829 command_runner.go:130] > # feature.
	I1213 18:35:57.243173   38829 command_runner.go:130] > #
	I1213 18:35:57.243179   38829 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1213 18:35:57.243186   38829 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1213 18:35:57.243196   38829 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1213 18:35:57.243208   38829 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1213 18:35:57.243219   38829 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1213 18:35:57.243222   38829 command_runner.go:130] > #
	I1213 18:35:57.243229   38829 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1213 18:35:57.243235   38829 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1213 18:35:57.243256   38829 command_runner.go:130] > #
	I1213 18:35:57.243267   38829 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1213 18:35:57.243274   38829 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1213 18:35:57.243283   38829 command_runner.go:130] > #
	I1213 18:35:57.243294   38829 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1213 18:35:57.243301   38829 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1213 18:35:57.243304   38829 command_runner.go:130] > # limitation.
	I1213 18:35:57.243341   38829 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1213 18:35:57.243623   38829 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1213 18:35:57.243757   38829 command_runner.go:130] > runtime_type = ""
	I1213 18:35:57.244003   38829 command_runner.go:130] > runtime_root = "/run/crun"
	I1213 18:35:57.244255   38829 command_runner.go:130] > inherit_default_runtime = false
	I1213 18:35:57.244399   38829 command_runner.go:130] > runtime_config_path = ""
	I1213 18:35:57.244539   38829 command_runner.go:130] > container_min_memory = ""
	I1213 18:35:57.244777   38829 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1213 18:35:57.245055   38829 command_runner.go:130] > monitor_cgroup = "pod"
	I1213 18:35:57.245214   38829 command_runner.go:130] > monitor_exec_cgroup = ""
	I1213 18:35:57.245448   38829 command_runner.go:130] > allowed_annotations = [
	I1213 18:35:57.245605   38829 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1213 18:35:57.245830   38829 command_runner.go:130] > ]
	I1213 18:35:57.246064   38829 command_runner.go:130] > privileged_without_host_devices = false
	I1213 18:35:57.246554   38829 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1213 18:35:57.246808   38829 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1213 18:35:57.246935   38829 command_runner.go:130] > runtime_type = ""
	I1213 18:35:57.247167   38829 command_runner.go:130] > runtime_root = "/run/runc"
	I1213 18:35:57.247404   38829 command_runner.go:130] > inherit_default_runtime = false
	I1213 18:35:57.247591   38829 command_runner.go:130] > runtime_config_path = ""
	I1213 18:35:57.247761   38829 command_runner.go:130] > container_min_memory = ""
	I1213 18:35:57.248046   38829 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1213 18:35:57.248332   38829 command_runner.go:130] > monitor_cgroup = "pod"
	I1213 18:35:57.248492   38829 command_runner.go:130] > monitor_exec_cgroup = ""
	I1213 18:35:57.248957   38829 command_runner.go:130] > privileged_without_host_devices = false
	I1213 18:35:57.249339   38829 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1213 18:35:57.249353   38829 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1213 18:35:57.249360   38829 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1213 18:35:57.249369   38829 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1213 18:35:57.249380   38829 command_runner.go:130] > # The currently supported resources are "cpuperiod" "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1213 18:35:57.249391   38829 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1213 18:35:57.249420   38829 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1213 18:35:57.249432   38829 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1213 18:35:57.249442   38829 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1213 18:35:57.249454   38829 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1213 18:35:57.249460   38829 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1213 18:35:57.249474   38829 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1213 18:35:57.249483   38829 command_runner.go:130] > # Example:
	I1213 18:35:57.249488   38829 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1213 18:35:57.249494   38829 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1213 18:35:57.249507   38829 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1213 18:35:57.249513   38829 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1213 18:35:57.249522   38829 command_runner.go:130] > # cpuset = "0-1"
	I1213 18:35:57.249525   38829 command_runner.go:130] > # cpushares = "5"
	I1213 18:35:57.249529   38829 command_runner.go:130] > # cpuquota = "1000"
	I1213 18:35:57.249533   38829 command_runner.go:130] > # cpuperiod = "100000"
	I1213 18:35:57.249548   38829 command_runner.go:130] > # cpulimit = "35"
	I1213 18:35:57.249556   38829 command_runner.go:130] > # Where:
	I1213 18:35:57.249560   38829 command_runner.go:130] > # The workload name is workload-type.
	I1213 18:35:57.249568   38829 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1213 18:35:57.249574   38829 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1213 18:35:57.249585   38829 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1213 18:35:57.249594   38829 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1213 18:35:57.249604   38829 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1213 18:35:57.249739   38829 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1213 18:35:57.249752   38829 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1213 18:35:57.249757   38829 command_runner.go:130] > # Default value is set to true
	I1213 18:35:57.250196   38829 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1213 18:35:57.250210   38829 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1213 18:35:57.250216   38829 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1213 18:35:57.250220   38829 command_runner.go:130] > # Default value is set to 'false'
	I1213 18:35:57.250699   38829 command_runner.go:130] > # disable_hostport_mapping = false
	I1213 18:35:57.250712   38829 command_runner.go:130] > # timezone To set the timezone for a container in CRI-O.
	I1213 18:35:57.250722   38829 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1213 18:35:57.251071   38829 command_runner.go:130] > # timezone = ""
	I1213 18:35:57.251082   38829 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1213 18:35:57.251086   38829 command_runner.go:130] > #
	I1213 18:35:57.251093   38829 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1213 18:35:57.251100   38829 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1213 18:35:57.251103   38829 command_runner.go:130] > [crio.image]
	I1213 18:35:57.251109   38829 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1213 18:35:57.251555   38829 command_runner.go:130] > # default_transport = "docker://"
	I1213 18:35:57.251569   38829 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1213 18:35:57.251576   38829 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1213 18:35:57.251964   38829 command_runner.go:130] > # global_auth_file = ""
	I1213 18:35:57.251977   38829 command_runner.go:130] > # The image used to instantiate infra containers.
	I1213 18:35:57.251982   38829 command_runner.go:130] > # This option supports live configuration reload.
	I1213 18:35:57.252443   38829 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1213 18:35:57.252459   38829 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1213 18:35:57.252468   38829 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1213 18:35:57.252474   38829 command_runner.go:130] > # This option supports live configuration reload.
	I1213 18:35:57.252817   38829 command_runner.go:130] > # pause_image_auth_file = ""
	I1213 18:35:57.252830   38829 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1213 18:35:57.252837   38829 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1213 18:35:57.252844   38829 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1213 18:35:57.252849   38829 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1213 18:35:57.253309   38829 command_runner.go:130] > # pause_command = "/pause"
	I1213 18:35:57.253323   38829 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1213 18:35:57.253330   38829 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1213 18:35:57.253336   38829 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1213 18:35:57.253342   38829 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1213 18:35:57.253349   38829 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1213 18:35:57.253355   38829 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1213 18:35:57.253590   38829 command_runner.go:130] > # pinned_images = [
	I1213 18:35:57.253600   38829 command_runner.go:130] > # ]
	I1213 18:35:57.253607   38829 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1213 18:35:57.253614   38829 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1213 18:35:57.253621   38829 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1213 18:35:57.253627   38829 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1213 18:35:57.253636   38829 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1213 18:35:57.253910   38829 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1213 18:35:57.253925   38829 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1213 18:35:57.253939   38829 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1213 18:35:57.253949   38829 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1213 18:35:57.253960   38829 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1213 18:35:57.253967   38829 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1213 18:35:57.253980   38829 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
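Since signature_policy is explicitly set to /etc/crio/policy.json on this node, the effective pull policy can be read straight from that file. For reference, a minimal fully permissive policy in containers-policy.json(5) format looks like the sketch below (an assumption about the format, not the policy captured in this run):

cat /etc/crio/policy.json
# minimal "accept everything" policy, for illustration only:
cat <<'EOF' | sudo tee /etc/crio/policy.json >/dev/null
{
    "default": [
        { "type": "insecureAcceptAnything" }
    ]
}
EOF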
	I1213 18:35:57.253986   38829 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1213 18:35:57.253995   38829 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1213 18:35:57.254000   38829 command_runner.go:130] > # changing them here.
	I1213 18:35:57.254012   38829 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1213 18:35:57.254016   38829 command_runner.go:130] > # insecure_registries = [
	I1213 18:35:57.254268   38829 command_runner.go:130] > # ]
	I1213 18:35:57.254281   38829 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1213 18:35:57.254287   38829 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1213 18:35:57.254424   38829 command_runner.go:130] > # image_volumes = "mkdir"
	I1213 18:35:57.254436   38829 command_runner.go:130] > # Temporary directory to use for storing big files
	I1213 18:35:57.254580   38829 command_runner.go:130] > # big_files_temporary_dir = ""
	I1213 18:35:57.254592   38829 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1213 18:35:57.254600   38829 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1213 18:35:57.254897   38829 command_runner.go:130] > # auto_reload_registries = false
	I1213 18:35:57.254910   38829 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1213 18:35:57.254920   38829 command_runner.go:130] > # gets canceled. This value will also be used for calculating the pull progress interval to pull_progress_timeout / 10.
	I1213 18:35:57.254926   38829 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1213 18:35:57.254930   38829 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1213 18:35:57.254935   38829 command_runner.go:130] > # The mode of short name resolution.
	I1213 18:35:57.254941   38829 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1213 18:35:57.254949   38829 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1213 18:35:57.254965   38829 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1213 18:35:57.254970   38829 command_runner.go:130] > # short_name_mode = "enforcing"
	I1213 18:35:57.254982   38829 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1213 18:35:57.254988   38829 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1213 18:35:57.255234   38829 command_runner.go:130] > # oci_artifact_mount_support = true
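Values in this [crio.image] section are usually overridden through drop-in files under /etc/crio/crio.conf.d, which is how the 02-crio.conf and 10-crio.conf files mentioned in the reload messages further down get applied. A hedged sketch of such a drop-in that pins the pause image against kubelet garbage collection (the drop-in filename is illustrative):

sudo tee /etc/crio/crio.conf.d/15-images.conf >/dev/null <<'EOF'
[crio.image]
pause_image = "registry.k8s.io/pause:3.10.1"
pinned_images = [
    "registry.k8s.io/pause*",   # glob match; wildcard only allowed at the end
]
EOF
sudo systemctl restart crio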
	I1213 18:35:57.255247   38829 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1213 18:35:57.255251   38829 command_runner.go:130] > # CNI plugins.
	I1213 18:35:57.255254   38829 command_runner.go:130] > [crio.network]
	I1213 18:35:57.255260   38829 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1213 18:35:57.255266   38829 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1213 18:35:57.255275   38829 command_runner.go:130] > # cni_default_network = ""
	I1213 18:35:57.255283   38829 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1213 18:35:57.255416   38829 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1213 18:35:57.255429   38829 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1213 18:35:57.255573   38829 command_runner.go:130] > # plugin_dirs = [
	I1213 18:35:57.255807   38829 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1213 18:35:57.255816   38829 command_runner.go:130] > # ]
	I1213 18:35:57.255821   38829 command_runner.go:130] > # List of included pod metrics.
	I1213 18:35:57.255825   38829 command_runner.go:130] > # included_pod_metrics = [
	I1213 18:35:57.255828   38829 command_runner.go:130] > # ]
	I1213 18:35:57.255834   38829 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1213 18:35:57.255838   38829 command_runner.go:130] > [crio.metrics]
	I1213 18:35:57.255843   38829 command_runner.go:130] > # Globally enable or disable metrics support.
	I1213 18:35:57.255847   38829 command_runner.go:130] > # enable_metrics = false
	I1213 18:35:57.255851   38829 command_runner.go:130] > # Specify enabled metrics collectors.
	I1213 18:35:57.255867   38829 command_runner.go:130] > # Per default all metrics are enabled.
	I1213 18:35:57.255879   38829 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1213 18:35:57.255889   38829 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1213 18:35:57.255900   38829 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1213 18:35:57.255905   38829 command_runner.go:130] > # metrics_collectors = [
	I1213 18:35:57.256016   38829 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1213 18:35:57.256027   38829 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1213 18:35:57.256031   38829 command_runner.go:130] > # 	"containers_oom_total",
	I1213 18:35:57.256331   38829 command_runner.go:130] > # 	"processes_defunct",
	I1213 18:35:57.256341   38829 command_runner.go:130] > # 	"operations_total",
	I1213 18:35:57.256346   38829 command_runner.go:130] > # 	"operations_latency_seconds",
	I1213 18:35:57.256351   38829 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1213 18:35:57.256361   38829 command_runner.go:130] > # 	"operations_errors_total",
	I1213 18:35:57.256365   38829 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1213 18:35:57.256370   38829 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1213 18:35:57.256374   38829 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1213 18:35:57.257117   38829 command_runner.go:130] > # 	"image_pulls_success_total",
	I1213 18:35:57.257132   38829 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1213 18:35:57.257137   38829 command_runner.go:130] > # 	"containers_oom_count_total",
	I1213 18:35:57.257143   38829 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1213 18:35:57.257155   38829 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1213 18:35:57.257161   38829 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1213 18:35:57.257170   38829 command_runner.go:130] > # ]
	I1213 18:35:57.257177   38829 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1213 18:35:57.257185   38829 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1213 18:35:57.257191   38829 command_runner.go:130] > # The port on which the metrics server will listen.
	I1213 18:35:57.257199   38829 command_runner.go:130] > # metrics_port = 9090
	I1213 18:35:57.257204   38829 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1213 18:35:57.257212   38829 command_runner.go:130] > # metrics_socket = ""
	I1213 18:35:57.257233   38829 command_runner.go:130] > # The certificate for the secure metrics server.
	I1213 18:35:57.257245   38829 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1213 18:35:57.257252   38829 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1213 18:35:57.257260   38829 command_runner.go:130] > # certificate on any modification event.
	I1213 18:35:57.257270   38829 command_runner.go:130] > # metrics_cert = ""
	I1213 18:35:57.257276   38829 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1213 18:35:57.257285   38829 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1213 18:35:57.257289   38829 command_runner.go:130] > # metrics_key = ""
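Metrics are off by default (enable_metrics = false above). A sketch of switching them on via a drop-in and scraping the endpoint once; the drop-in name is illustrative and a full restart is assumed to be the simplest way to pick the change up:

sudo tee /etc/crio/crio.conf.d/20-metrics.conf >/dev/null <<'EOF'
[crio.metrics]
enable_metrics = true
metrics_host = "127.0.0.1"
metrics_port = 9090
EOF
sudo systemctl restart crio
curl -s http://127.0.0.1:9090/metrics | grep '^crio_operations' | head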
	I1213 18:35:57.257299   38829 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1213 18:35:57.257318   38829 command_runner.go:130] > [crio.tracing]
	I1213 18:35:57.257325   38829 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1213 18:35:57.257329   38829 command_runner.go:130] > # enable_tracing = false
	I1213 18:35:57.257339   38829 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1213 18:35:57.257343   38829 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1213 18:35:57.257354   38829 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1213 18:35:57.257366   38829 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1213 18:35:57.257381   38829 command_runner.go:130] > # CRI-O NRI configuration.
	I1213 18:35:57.257393   38829 command_runner.go:130] > [crio.nri]
	I1213 18:35:57.257402   38829 command_runner.go:130] > # Globally enable or disable NRI.
	I1213 18:35:57.257406   38829 command_runner.go:130] > # enable_nri = true
	I1213 18:35:57.257410   38829 command_runner.go:130] > # NRI socket to listen on.
	I1213 18:35:57.257415   38829 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1213 18:35:57.257423   38829 command_runner.go:130] > # NRI plugin directory to use.
	I1213 18:35:57.257428   38829 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1213 18:35:57.257437   38829 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1213 18:35:57.257442   38829 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1213 18:35:57.257457   38829 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1213 18:35:57.257514   38829 command_runner.go:130] > # nri_disable_connections = false
	I1213 18:35:57.257530   38829 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1213 18:35:57.257535   38829 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1213 18:35:57.257544   38829 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1213 18:35:57.257549   38829 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1213 18:35:57.257558   38829 command_runner.go:130] > # NRI default validator configuration.
	I1213 18:35:57.257566   38829 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1213 18:35:57.257576   38829 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1213 18:35:57.257584   38829 command_runner.go:130] > # can be restricted/rejected:
	I1213 18:35:57.257588   38829 command_runner.go:130] > # - OCI hook injection
	I1213 18:35:57.257597   38829 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1213 18:35:57.257609   38829 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1213 18:35:57.257615   38829 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1213 18:35:57.257624   38829 command_runner.go:130] > # - adjustment of linux namespaces
	I1213 18:35:57.257632   38829 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1213 18:35:57.257642   38829 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1213 18:35:57.257652   38829 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1213 18:35:57.257660   38829 command_runner.go:130] > #
	I1213 18:35:57.257664   38829 command_runner.go:130] > # [crio.nri.default_validator]
	I1213 18:35:57.257672   38829 command_runner.go:130] > # nri_enable_default_validator = false
	I1213 18:35:57.257686   38829 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1213 18:35:57.257692   38829 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1213 18:35:57.257699   38829 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1213 18:35:57.257712   38829 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1213 18:35:57.257721   38829 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1213 18:35:57.257726   38829 command_runner.go:130] > # nri_validator_required_plugins = [
	I1213 18:35:57.257732   38829 command_runner.go:130] > # ]
	I1213 18:35:57.257738   38829 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
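The builtin default validator described above can be enabled by dropping in the same keys with the restrictions you want rejected. A sketch under the assumption that a restart picks the change up (drop-in filename illustrative):

sudo tee /etc/crio/crio.conf.d/30-nri-validator.conf >/dev/null <<'EOF'
[crio.nri.default_validator]
nri_enable_default_validator = true
nri_validator_reject_oci_hook_adjustment = true
nri_validator_reject_namespace_adjustment = true
EOF
sudo systemctl restart crio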
	I1213 18:35:57.257747   38829 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1213 18:35:57.257763   38829 command_runner.go:130] > [crio.stats]
	I1213 18:35:57.257772   38829 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1213 18:35:57.257778   38829 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1213 18:35:57.257782   38829 command_runner.go:130] > # stats_collection_period = 0
	I1213 18:35:57.257792   38829 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1213 18:35:57.257800   38829 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1213 18:35:57.257809   38829 command_runner.go:130] > # collection_period = 0
	I1213 18:35:57.259571   38829 command_runner.go:130] ! time="2025-12-13T18:35:57.21464252Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1213 18:35:57.259589   38829 command_runner.go:130] ! time="2025-12-13T18:35:57.214677794Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1213 18:35:57.259613   38829 command_runner.go:130] ! time="2025-12-13T18:35:57.214706635Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1213 18:35:57.259625   38829 command_runner.go:130] ! time="2025-12-13T18:35:57.21473084Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1213 18:35:57.259635   38829 command_runner.go:130] ! time="2025-12-13T18:35:57.214801782Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:35:57.259643   38829 command_runner.go:130] ! time="2025-12-13T18:35:57.215251382Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1213 18:35:57.259658   38829 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1213 18:35:57.259749   38829 cni.go:84] Creating CNI manager for ""
	I1213 18:35:57.259765   38829 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 18:35:57.259800   38829 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 18:35:57.259831   38829 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-752103 NodeName:functional-752103 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 18:35:57.259972   38829 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-752103"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 18:35:57.260053   38829 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 18:35:57.267743   38829 command_runner.go:130] > kubeadm
	I1213 18:35:57.267764   38829 command_runner.go:130] > kubectl
	I1213 18:35:57.267769   38829 command_runner.go:130] > kubelet
	I1213 18:35:57.268114   38829 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 18:35:57.268211   38829 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 18:35:57.275739   38829 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1213 18:35:57.288967   38829 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 18:35:57.301790   38829 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
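The generated config is written to /var/tmp/minikube/kubeadm.yaml.new above. Outside of minikube, recent kubeadm releases can sanity-check such a file directly; a sketch, assuming the bundled kubeadm supports the validate subcommand:

sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
  --config /var/tmp/minikube/kubeadm.yaml.new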
	I1213 18:35:57.314673   38829 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 18:35:57.318486   38829 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1213 18:35:57.318580   38829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 18:35:57.437137   38829 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 18:35:57.456752   38829 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103 for IP: 192.168.49.2
	I1213 18:35:57.456776   38829 certs.go:195] generating shared ca certs ...
	I1213 18:35:57.456809   38829 certs.go:227] acquiring lock for ca certs: {Name:mkf9b87b9a1a82bdfcae8a54365751154f6d858f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 18:35:57.456950   38829 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key
	I1213 18:35:57.457003   38829 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key
	I1213 18:35:57.457091   38829 certs.go:257] generating profile certs ...
	I1213 18:35:57.457200   38829 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/client.key
	I1213 18:35:57.457253   38829 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/apiserver.key.597c6026
	I1213 18:35:57.457304   38829 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/proxy-client.key
	I1213 18:35:57.457312   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1213 18:35:57.457324   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1213 18:35:57.457340   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1213 18:35:57.457356   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1213 18:35:57.457367   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1213 18:35:57.457383   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1213 18:35:57.457395   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1213 18:35:57.457405   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1213 18:35:57.457457   38829 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637.pem (1338 bytes)
	W1213 18:35:57.457490   38829 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637_empty.pem, impossibly tiny 0 bytes
	I1213 18:35:57.457499   38829 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 18:35:57.457529   38829 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem (1082 bytes)
	I1213 18:35:57.457562   38829 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem (1123 bytes)
	I1213 18:35:57.457593   38829 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem (1675 bytes)
	I1213 18:35:57.457644   38829 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem (1708 bytes)
	I1213 18:35:57.457676   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637.pem -> /usr/share/ca-certificates/4637.pem
	I1213 18:35:57.457691   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem -> /usr/share/ca-certificates/46372.pem
	I1213 18:35:57.457705   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1213 18:35:57.458319   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 18:35:57.479443   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 18:35:57.498974   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 18:35:57.520210   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 18:35:57.540966   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 18:35:57.558774   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 18:35:57.576442   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 18:35:57.593767   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 18:35:57.611061   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637.pem --> /usr/share/ca-certificates/4637.pem (1338 bytes)
	I1213 18:35:57.628952   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem --> /usr/share/ca-certificates/46372.pem (1708 bytes)
	I1213 18:35:57.646627   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 18:35:57.664290   38829 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 18:35:57.677693   38829 ssh_runner.go:195] Run: openssl version
	I1213 18:35:57.683465   38829 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1213 18:35:57.683918   38829 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4637.pem
	I1213 18:35:57.691710   38829 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4637.pem /etc/ssl/certs/4637.pem
	I1213 18:35:57.699237   38829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4637.pem
	I1213 18:35:57.702943   38829 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 13 18:27 /usr/share/ca-certificates/4637.pem
	I1213 18:35:57.702972   38829 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 18:27 /usr/share/ca-certificates/4637.pem
	I1213 18:35:57.703038   38829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4637.pem
	I1213 18:35:57.743436   38829 command_runner.go:130] > 51391683
	I1213 18:35:57.743914   38829 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 18:35:57.751320   38829 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/46372.pem
	I1213 18:35:57.758498   38829 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/46372.pem /etc/ssl/certs/46372.pem
	I1213 18:35:57.765907   38829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/46372.pem
	I1213 18:35:57.769321   38829 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 13 18:27 /usr/share/ca-certificates/46372.pem
	I1213 18:35:57.769343   38829 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 18:27 /usr/share/ca-certificates/46372.pem
	I1213 18:35:57.769391   38829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/46372.pem
	I1213 18:35:57.809666   38829 command_runner.go:130] > 3ec20f2e
	I1213 18:35:57.810146   38829 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 18:35:57.818335   38829 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 18:35:57.826660   38829 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 18:35:57.834746   38829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 18:35:57.838666   38829 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 13 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I1213 18:35:57.838764   38829 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I1213 18:35:57.838851   38829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 18:35:57.879619   38829 command_runner.go:130] > b5213941
	I1213 18:35:57.880088   38829 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
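The hash/symlink pairs above follow OpenSSL's CA lookup convention: the subject hash printed for each file (51391683, 3ec20f2e, b5213941) names the <hash>.0 symlink under /etc/ssl/certs. A sketch of the same step done by hand for the minikubeCA file, with paths as in the log:

h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
ls -l "/etc/ssl/certs/${h}.0"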
	I1213 18:35:57.887654   38829 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 18:35:57.891412   38829 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 18:35:57.891437   38829 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1213 18:35:57.891445   38829 command_runner.go:130] > Device: 259,1	Inode: 1056084     Links: 1
	I1213 18:35:57.891452   38829 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 18:35:57.891459   38829 command_runner.go:130] > Access: 2025-12-13 18:31:50.964784337 +0000
	I1213 18:35:57.891465   38829 command_runner.go:130] > Modify: 2025-12-13 18:27:46.490235937 +0000
	I1213 18:35:57.891470   38829 command_runner.go:130] > Change: 2025-12-13 18:27:46.490235937 +0000
	I1213 18:35:57.891475   38829 command_runner.go:130] >  Birth: 2025-12-13 18:27:46.490235937 +0000
	I1213 18:35:57.891539   38829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 18:35:57.937033   38829 command_runner.go:130] > Certificate will not expire
	I1213 18:35:57.937482   38829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 18:35:57.978137   38829 command_runner.go:130] > Certificate will not expire
	I1213 18:35:57.978564   38829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 18:35:58.033951   38829 command_runner.go:130] > Certificate will not expire
	I1213 18:35:58.034441   38829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 18:35:58.075936   38829 command_runner.go:130] > Certificate will not expire
	I1213 18:35:58.076412   38829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 18:35:58.118212   38829 command_runner.go:130] > Certificate will not expire
	I1213 18:35:58.118338   38829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 18:35:58.159347   38829 command_runner.go:130] > Certificate will not expire
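Each of these probes relies on openssl's -checkend flag, which exits non-zero if the certificate expires within the given number of seconds (86400 = 24 hours). A compact sketch of the same checks:

for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
         etcd/server etcd/healthcheck-client etcd/peer; do
  openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/${c}.crt" \
    && echo "${c}: will not expire within 24h" \
    || echo "${c}: expiring within 24h"
done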
	I1213 18:35:58.159444   38829 kubeadm.go:401] StartCluster: {Name:functional-752103 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-752103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFi
rmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 18:35:58.159559   38829 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 18:35:58.159642   38829 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 18:35:58.186428   38829 cri.go:89] found id: ""
	I1213 18:35:58.186502   38829 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 18:35:58.193645   38829 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1213 18:35:58.193670   38829 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1213 18:35:58.193678   38829 command_runner.go:130] > /var/lib/minikube/etcd:
	I1213 18:35:58.194604   38829 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 18:35:58.194674   38829 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 18:35:58.194749   38829 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 18:35:58.202237   38829 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 18:35:58.202735   38829 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-752103" does not appear in /home/jenkins/minikube-integration/22122-2686/kubeconfig
	I1213 18:35:58.202850   38829 kubeconfig.go:62] /home/jenkins/minikube-integration/22122-2686/kubeconfig needs updating (will repair): [kubeconfig missing "functional-752103" cluster setting kubeconfig missing "functional-752103" context setting]
	I1213 18:35:58.203123   38829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/kubeconfig: {Name:mkd364151ba0e08b56cf2ae826abbcc274faaabf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 18:35:58.203546   38829 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22122-2686/kubeconfig
	I1213 18:35:58.203705   38829 kapi.go:59] client config for functional-752103: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/client.crt", KeyFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/client.key", CAFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextP
rotos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 18:35:58.204223   38829 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1213 18:35:58.204247   38829 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1213 18:35:58.204258   38829 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1213 18:35:58.204263   38829 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1213 18:35:58.204267   38829 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1213 18:35:58.204300   38829 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1213 18:35:58.204536   38829 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 18:35:58.212005   38829 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1213 18:35:58.212037   38829 kubeadm.go:602] duration metric: took 17.346627ms to restartPrimaryControlPlane
	I1213 18:35:58.212045   38829 kubeadm.go:403] duration metric: took 52.608163ms to StartCluster
	I1213 18:35:58.212060   38829 settings.go:142] acquiring lock: {Name:mkabef07beee93a0619ef6b8f854900ab9ed0899 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 18:35:58.212116   38829 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22122-2686/kubeconfig
	I1213 18:35:58.212712   38829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/kubeconfig: {Name:mkd364151ba0e08b56cf2ae826abbcc274faaabf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 18:35:58.212903   38829 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 18:35:58.213488   38829 config.go:182] Loaded profile config "functional-752103": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 18:35:58.213543   38829 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 18:35:58.213607   38829 addons.go:70] Setting storage-provisioner=true in profile "functional-752103"
	I1213 18:35:58.213620   38829 addons.go:239] Setting addon storage-provisioner=true in "functional-752103"
	I1213 18:35:58.213643   38829 host.go:66] Checking if "functional-752103" exists ...
	I1213 18:35:58.214229   38829 cli_runner.go:164] Run: docker container inspect functional-752103 --format={{.State.Status}}
	I1213 18:35:58.214390   38829 addons.go:70] Setting default-storageclass=true in profile "functional-752103"
	I1213 18:35:58.214412   38829 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-752103"
	I1213 18:35:58.214713   38829 cli_runner.go:164] Run: docker container inspect functional-752103 --format={{.State.Status}}
	I1213 18:35:58.219256   38829 out.go:179] * Verifying Kubernetes components...
	I1213 18:35:58.222143   38829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 18:35:58.244199   38829 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 18:35:58.247016   38829 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:35:58.247042   38829 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 18:35:58.247112   38829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:35:58.257520   38829 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22122-2686/kubeconfig
	I1213 18:35:58.257687   38829 kapi.go:59] client config for functional-752103: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/client.crt", KeyFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/client.key", CAFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextP
rotos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 18:35:58.257971   38829 addons.go:239] Setting addon default-storageclass=true in "functional-752103"
	I1213 18:35:58.258004   38829 host.go:66] Checking if "functional-752103" exists ...
	I1213 18:35:58.258425   38829 cli_runner.go:164] Run: docker container inspect functional-752103 --format={{.State.Status}}
	I1213 18:35:58.277237   38829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa Username:docker}
	I1213 18:35:58.306835   38829 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 18:35:58.306855   38829 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 18:35:58.306918   38829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:35:58.340724   38829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa Username:docker}
	I1213 18:35:58.416694   38829 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 18:35:58.451165   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:35:58.493354   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:35:59.080268   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:35:59.080307   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:35:59.080337   38829 retry.go:31] will retry after 153.209012ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:35:59.080385   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:35:59.080398   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:35:59.080404   38829 retry.go:31] will retry after 291.62792ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:35:59.080464   38829 node_ready.go:35] waiting up to 6m0s for node "functional-752103" to be "Ready" ...
	I1213 18:35:59.080578   38829 type.go:168] "Request Body" body=""
	I1213 18:35:59.080656   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:35:59.080963   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:35:59.234362   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:35:59.300149   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:35:59.300200   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:35:59.300219   38829 retry.go:31] will retry after 511.331502ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:35:59.372301   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:35:59.426538   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:35:59.430102   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:35:59.430132   38829 retry.go:31] will retry after 426.700032ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:35:59.581486   38829 type.go:168] "Request Body" body=""
	I1213 18:35:59.581586   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:35:59.581963   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:35:59.812414   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:35:59.857973   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:35:59.893611   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:35:59.893688   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:35:59.893723   38829 retry.go:31] will retry after 310.068383ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:35:59.947559   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:35:59.947617   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:35:59.947640   38829 retry.go:31] will retry after 829.65637ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:00.080795   38829 type.go:168] "Request Body" body=""
	I1213 18:36:00.080875   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:00.081240   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:00.205923   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:36:00.416702   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:00.416818   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:00.416873   38829 retry.go:31] will retry after 579.133816ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:00.581369   38829 type.go:168] "Request Body" body=""
	I1213 18:36:00.581557   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:00.582010   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:00.778452   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:36:00.837536   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:00.837585   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:00.837604   38829 retry.go:31] will retry after 974.075863ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:00.996954   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:36:01.059672   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:01.059714   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:01.059763   38829 retry.go:31] will retry after 1.136000803s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:01.080856   38829 type.go:168] "Request Body" body=""
	I1213 18:36:01.080924   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:01.081261   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:01.081306   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:01.580749   38829 type.go:168] "Request Body" body=""
	I1213 18:36:01.580822   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:01.581172   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:01.812632   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:36:01.883701   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:01.883803   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:01.883825   38829 retry.go:31] will retry after 921.808005ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:02.081109   38829 type.go:168] "Request Body" body=""
	I1213 18:36:02.081198   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:02.081477   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:02.196877   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:36:02.253907   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:02.257605   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:02.257637   38829 retry.go:31] will retry after 1.546462752s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:02.581141   38829 type.go:168] "Request Body" body=""
	I1213 18:36:02.581286   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:02.581677   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:02.805901   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:36:02.889297   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:02.893182   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:02.893216   38829 retry.go:31] will retry after 1.247577285s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:03.081687   38829 type.go:168] "Request Body" body=""
	I1213 18:36:03.081764   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:03.082108   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:03.082162   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:03.580643   38829 type.go:168] "Request Body" body=""
	I1213 18:36:03.580714   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:03.580995   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:03.804445   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:36:03.865304   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:03.865353   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:03.865372   38829 retry.go:31] will retry after 3.450909707s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:04.080758   38829 type.go:168] "Request Body" body=""
	I1213 18:36:04.080837   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:04.081202   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:04.141517   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:36:04.204625   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:04.204670   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:04.204689   38829 retry.go:31] will retry after 3.409599879s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:04.581166   38829 type.go:168] "Request Body" body=""
	I1213 18:36:04.581250   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:04.581566   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:05.081373   38829 type.go:168] "Request Body" body=""
	I1213 18:36:05.081443   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:05.081739   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:05.581581   38829 type.go:168] "Request Body" body=""
	I1213 18:36:05.581657   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:05.581992   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:05.582049   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:06.080707   38829 type.go:168] "Request Body" body=""
	I1213 18:36:06.080786   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:06.081099   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:06.580765   38829 type.go:168] "Request Body" body=""
	I1213 18:36:06.580849   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:06.581220   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:07.080726   38829 type.go:168] "Request Body" body=""
	I1213 18:36:07.080806   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:07.081195   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:07.316533   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:36:07.393411   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:07.397246   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:07.397278   38829 retry.go:31] will retry after 2.442447522s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:07.581582   38829 type.go:168] "Request Body" body=""
	I1213 18:36:07.581660   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:07.582007   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:07.615412   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:36:07.670357   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:07.674453   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:07.674491   38829 retry.go:31] will retry after 4.254133001s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:08.080696   38829 type.go:168] "Request Body" body=""
	I1213 18:36:08.080805   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:08.081173   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:08.081221   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:08.581149   38829 type.go:168] "Request Body" body=""
	I1213 18:36:08.581249   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:08.581593   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:09.081583   38829 type.go:168] "Request Body" body=""
	I1213 18:36:09.081656   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:09.081980   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:09.581654   38829 type.go:168] "Request Body" body=""
	I1213 18:36:09.581729   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:09.582054   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:09.840484   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:36:09.900307   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:09.900343   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:09.900361   38829 retry.go:31] will retry after 4.640117862s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:10.081715   38829 type.go:168] "Request Body" body=""
	I1213 18:36:10.081794   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:10.082116   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:10.082183   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:10.580872   38829 type.go:168] "Request Body" body=""
	I1213 18:36:10.580959   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:10.581373   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:11.080692   38829 type.go:168] "Request Body" body=""
	I1213 18:36:11.080776   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:11.081115   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:11.580824   38829 type.go:168] "Request Body" body=""
	I1213 18:36:11.580896   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:11.581249   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:11.928812   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:36:11.987432   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:11.987481   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:11.987500   38829 retry.go:31] will retry after 8.251825899s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:12.081733   38829 type.go:168] "Request Body" body=""
	I1213 18:36:12.081819   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:12.082391   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:12.082470   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:12.580663   38829 type.go:168] "Request Body" body=""
	I1213 18:36:12.580742   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:12.581100   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:13.080737   38829 type.go:168] "Request Body" body=""
	I1213 18:36:13.080809   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:13.081119   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:13.580828   38829 type.go:168] "Request Body" body=""
	I1213 18:36:13.580900   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:13.581257   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:14.080983   38829 type.go:168] "Request Body" body=""
	I1213 18:36:14.081075   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:14.081364   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:14.540746   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:36:14.581321   38829 type.go:168] "Request Body" body=""
	I1213 18:36:14.581395   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:14.581672   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:14.581722   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:14.600534   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:14.600587   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:14.600605   38829 retry.go:31] will retry after 8.957681085s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:15.080748   38829 type.go:168] "Request Body" body=""
	I1213 18:36:15.080845   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:15.081200   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:15.580789   38829 type.go:168] "Request Body" body=""
	I1213 18:36:15.580868   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:15.581235   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:16.080743   38829 type.go:168] "Request Body" body=""
	I1213 18:36:16.080819   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:16.081170   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:16.580886   38829 type.go:168] "Request Body" body=""
	I1213 18:36:16.580958   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:16.581330   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:17.081614   38829 type.go:168] "Request Body" body=""
	I1213 18:36:17.081684   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:17.081955   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:17.081995   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:17.580662   38829 type.go:168] "Request Body" body=""
	I1213 18:36:17.580732   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:17.581063   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:18.080650   38829 type.go:168] "Request Body" body=""
	I1213 18:36:18.080721   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:18.081108   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:18.580672   38829 type.go:168] "Request Body" body=""
	I1213 18:36:18.580742   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:18.581079   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:19.081047   38829 type.go:168] "Request Body" body=""
	I1213 18:36:19.081115   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:19.081424   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:19.580732   38829 type.go:168] "Request Body" body=""
	I1213 18:36:19.580810   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:19.581191   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:19.581284   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:20.080706   38829 type.go:168] "Request Body" body=""
	I1213 18:36:20.080786   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:20.081124   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:20.239601   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:36:20.301361   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:20.301401   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:20.301420   38829 retry.go:31] will retry after 6.59814029s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:20.580747   38829 type.go:168] "Request Body" body=""
	I1213 18:36:20.580821   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:20.581125   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:21.080844   38829 type.go:168] "Request Body" body=""
	I1213 18:36:21.080933   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:21.081353   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:21.580686   38829 type.go:168] "Request Body" body=""
	I1213 18:36:21.580762   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:21.581080   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:22.080810   38829 type.go:168] "Request Body" body=""
	I1213 18:36:22.080884   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:22.081217   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:22.081274   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:22.580705   38829 type.go:168] "Request Body" body=""
	I1213 18:36:22.580799   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:22.581136   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:23.080675   38829 type.go:168] "Request Body" body=""
	I1213 18:36:23.080747   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:23.081137   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:23.558605   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:36:23.581258   38829 type.go:168] "Request Body" body=""
	I1213 18:36:23.581331   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:23.581605   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:23.617607   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:23.617653   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:23.617671   38829 retry.go:31] will retry after 14.669686806s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:24.081419   38829 type.go:168] "Request Body" body=""
	I1213 18:36:24.081508   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:24.081878   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:24.081930   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:24.580661   38829 type.go:168] "Request Body" body=""
	I1213 18:36:24.580735   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:24.581024   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:25.080794   38829 type.go:168] "Request Body" body=""
	I1213 18:36:25.080880   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:25.081347   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:25.580742   38829 type.go:168] "Request Body" body=""
	I1213 18:36:25.580816   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:25.581207   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:26.080781   38829 type.go:168] "Request Body" body=""
	I1213 18:36:26.080854   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:26.081166   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:26.580764   38829 type.go:168] "Request Body" body=""
	I1213 18:36:26.580862   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:26.581247   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:26.581300   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:26.900727   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:36:26.960607   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:26.960668   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:26.960687   38829 retry.go:31] will retry after 15.397640826s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:27.080883   38829 type.go:168] "Request Body" body=""
	I1213 18:36:27.080957   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:27.081297   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:27.580637   38829 type.go:168] "Request Body" body=""
	I1213 18:36:27.580703   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:27.580956   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:28.080641   38829 type.go:168] "Request Body" body=""
	I1213 18:36:28.080752   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:28.081081   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:28.580963   38829 type.go:168] "Request Body" body=""
	I1213 18:36:28.581049   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:28.581366   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:28.581418   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:29.081265   38829 type.go:168] "Request Body" body=""
	I1213 18:36:29.081330   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:29.081585   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:29.581341   38829 type.go:168] "Request Body" body=""
	I1213 18:36:29.581414   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:29.581724   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:30.083283   38829 type.go:168] "Request Body" body=""
	I1213 18:36:30.083370   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:30.083708   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:30.581559   38829 type.go:168] "Request Body" body=""
	I1213 18:36:30.581633   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:30.581902   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:30.581946   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:31.081665   38829 type.go:168] "Request Body" body=""
	I1213 18:36:31.081736   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:31.082102   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:31.580734   38829 type.go:168] "Request Body" body=""
	I1213 18:36:31.580815   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:31.581165   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:32.080588   38829 type.go:168] "Request Body" body=""
	I1213 18:36:32.080654   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:32.080909   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:32.581657   38829 type.go:168] "Request Body" body=""
	I1213 18:36:32.581734   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:32.582056   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:32.582116   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:33.080787   38829 type.go:168] "Request Body" body=""
	I1213 18:36:33.080867   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:33.081206   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:33.580678   38829 type.go:168] "Request Body" body=""
	I1213 18:36:33.580745   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:33.580998   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:34.080961   38829 type.go:168] "Request Body" body=""
	I1213 18:36:34.081065   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:34.081433   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:34.581228   38829 type.go:168] "Request Body" body=""
	I1213 18:36:34.581300   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:34.581636   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:35.081408   38829 type.go:168] "Request Body" body=""
	I1213 18:36:35.081478   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:35.081747   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:35.081790   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:35.581492   38829 type.go:168] "Request Body" body=""
	I1213 18:36:35.581568   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:35.581859   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:36.081553   38829 type.go:168] "Request Body" body=""
	I1213 18:36:36.081623   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:36.081928   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:36.581632   38829 type.go:168] "Request Body" body=""
	I1213 18:36:36.581711   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:36.582018   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:37.080726   38829 type.go:168] "Request Body" body=""
	I1213 18:36:37.080804   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:37.081189   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:37.580917   38829 type.go:168] "Request Body" body=""
	I1213 18:36:37.580993   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:37.581352   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:37.581446   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:38.080688   38829 type.go:168] "Request Body" body=""
	I1213 18:36:38.080770   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:38.081101   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:38.287495   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:36:38.357240   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:38.360822   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:38.360853   38829 retry.go:31] will retry after 30.28485436s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:38.581302   38829 type.go:168] "Request Body" body=""
	I1213 18:36:38.581374   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:38.581695   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:39.081218   38829 type.go:168] "Request Body" body=""
	I1213 18:36:39.081295   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:39.081664   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:39.581465   38829 type.go:168] "Request Body" body=""
	I1213 18:36:39.581533   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:39.581794   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:39.581852   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:40.081640   38829 type.go:168] "Request Body" body=""
	I1213 18:36:40.081724   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:40.082071   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:40.580714   38829 type.go:168] "Request Body" body=""
	I1213 18:36:40.580788   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:40.581147   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:41.080724   38829 type.go:168] "Request Body" body=""
	I1213 18:36:41.080801   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:41.081086   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:41.580719   38829 type.go:168] "Request Body" body=""
	I1213 18:36:41.580809   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:41.581140   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:42.080831   38829 type.go:168] "Request Body" body=""
	I1213 18:36:42.080909   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:42.081302   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:42.081363   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:42.358603   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:36:42.430743   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:42.430803   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:42.430822   38829 retry.go:31] will retry after 12.093455046s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:42.581106   38829 type.go:168] "Request Body" body=""
	I1213 18:36:42.581178   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:42.581444   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:43.081272   38829 type.go:168] "Request Body" body=""
	I1213 18:36:43.081354   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:43.081648   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:43.580658   38829 type.go:168] "Request Body" body=""
	I1213 18:36:43.580735   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:43.581055   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:44.080685   38829 type.go:168] "Request Body" body=""
	I1213 18:36:44.080795   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:44.081152   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:44.580685   38829 type.go:168] "Request Body" body=""
	I1213 18:36:44.580759   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:44.581102   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:44.581161   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:45.080810   38829 type.go:168] "Request Body" body=""
	I1213 18:36:45.080894   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:45.081226   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:45.581071   38829 type.go:168] "Request Body" body=""
	I1213 18:36:45.581137   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:45.581415   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:46.081136   38829 type.go:168] "Request Body" body=""
	I1213 18:36:46.081217   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:46.081567   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:46.581397   38829 type.go:168] "Request Body" body=""
	I1213 18:36:46.581468   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:46.581797   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:46.581852   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:47.081586   38829 type.go:168] "Request Body" body=""
	I1213 18:36:47.081660   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:47.081917   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:47.580671   38829 type.go:168] "Request Body" body=""
	I1213 18:36:47.580752   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:47.581109   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:48.080824   38829 type.go:168] "Request Body" body=""
	I1213 18:36:48.080903   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:48.081209   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:48.581175   38829 type.go:168] "Request Body" body=""
	I1213 18:36:48.581241   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:48.581504   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:49.081596   38829 type.go:168] "Request Body" body=""
	I1213 18:36:49.081669   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:49.082029   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:49.082084   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
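(Editor's note on the polling above: the node_ready.go warnings show the same failure from the readiness side - a GET against /api/v1/nodes/functional-752103 roughly every 500ms, each one ending in "connection refused" because nothing is listening on 192.168.49.2:8441 yet. A minimal, hypothetical probe for that condition is sketched below; the address, poll interval, and overall deadline are assumptions, and this is not how minikube itself checks node readiness.)

// probesketch.go - illustrative only: poll a TCP endpoint until it accepts
// connections, treating "connection refused" as "apiserver not up yet".
package main

import (
	"errors"
	"fmt"
	"net"
	"syscall"
	"time"
)

func main() {
	addr := "192.168.49.2:8441" // apiserver endpoint as seen in the log
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("apiserver port is accepting connections")
			return
		}
		if errors.Is(err, syscall.ECONNREFUSED) {
			// Same condition the log reports: nothing listening on the port yet.
			fmt.Println("connection refused; apiserver not listening yet, will retry")
		} else {
			fmt.Printf("dial error: %v\n", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the apiserver to listen")
}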
	I1213 18:36:49.580622   38829 type.go:168] "Request Body" body=""
	I1213 18:36:49.580704   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:49.581055   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:50.080743   38829 type.go:168] "Request Body" body=""
	I1213 18:36:50.080823   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:50.081147   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:50.580734   38829 type.go:168] "Request Body" body=""
	I1213 18:36:50.580806   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:50.581174   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:51.080882   38829 type.go:168] "Request Body" body=""
	I1213 18:36:51.080963   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:51.081341   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:51.580687   38829 type.go:168] "Request Body" body=""
	I1213 18:36:51.580761   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:51.581057   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:51.581110   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:52.080731   38829 type.go:168] "Request Body" body=""
	I1213 18:36:52.080817   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:52.081192   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:52.580893   38829 type.go:168] "Request Body" body=""
	I1213 18:36:52.580986   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:52.581347   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:53.080709   38829 type.go:168] "Request Body" body=""
	I1213 18:36:53.080779   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:53.081063   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:53.580755   38829 type.go:168] "Request Body" body=""
	I1213 18:36:53.580829   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:53.581182   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:53.581240   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:54.081104   38829 type.go:168] "Request Body" body=""
	I1213 18:36:54.081173   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:54.081470   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:54.525326   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:36:54.580832   38829 type.go:168] "Request Body" body=""
	I1213 18:36:54.580898   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:54.581173   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:54.600652   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:54.600694   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:54.600713   38829 retry.go:31] will retry after 41.212755678s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:55.081498   38829 type.go:168] "Request Body" body=""
	I1213 18:36:55.081571   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:55.081915   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:55.580632   38829 type.go:168] "Request Body" body=""
	I1213 18:36:55.580703   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:55.581066   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:56.080716   38829 type.go:168] "Request Body" body=""
	I1213 18:36:56.080780   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:56.081078   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:56.081124   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:56.580765   38829 type.go:168] "Request Body" body=""
	I1213 18:36:56.580847   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:56.581215   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:57.080817   38829 type.go:168] "Request Body" body=""
	I1213 18:36:57.080904   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:57.081246   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:57.580702   38829 type.go:168] "Request Body" body=""
	I1213 18:36:57.580781   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:57.581095   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:58.080724   38829 type.go:168] "Request Body" body=""
	I1213 18:36:58.080815   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:58.081171   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:58.081230   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:58.580804   38829 type.go:168] "Request Body" body=""
	I1213 18:36:58.580886   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:58.581230   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:59.080817   38829 type.go:168] "Request Body" body=""
	I1213 18:36:59.080891   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:59.081167   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:59.580749   38829 type.go:168] "Request Body" body=""
	I1213 18:36:59.580848   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:59.581262   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:00.080983   38829 type.go:168] "Request Body" body=""
	I1213 18:37:00.081091   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:00.081411   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:00.081460   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:00.580690   38829 type.go:168] "Request Body" body=""
	I1213 18:37:00.580766   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:00.581072   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:01.080673   38829 type.go:168] "Request Body" body=""
	I1213 18:37:01.080760   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:01.081112   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:01.580720   38829 type.go:168] "Request Body" body=""
	I1213 18:37:01.580794   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:01.581158   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:02.080753   38829 type.go:168] "Request Body" body=""
	I1213 18:37:02.080821   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:02.081110   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:02.580732   38829 type.go:168] "Request Body" body=""
	I1213 18:37:02.580806   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:02.581155   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:02.581205   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:03.080748   38829 type.go:168] "Request Body" body=""
	I1213 18:37:03.080823   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:03.081153   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:03.580615   38829 type.go:168] "Request Body" body=""
	I1213 18:37:03.580691   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:03.580974   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:04.080845   38829 type.go:168] "Request Body" body=""
	I1213 18:37:04.080916   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:04.081330   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:04.580902   38829 type.go:168] "Request Body" body=""
	I1213 18:37:04.581002   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:04.581380   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:04.581437   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:05.080788   38829 type.go:168] "Request Body" body=""
	I1213 18:37:05.080867   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:05.081182   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:05.580731   38829 type.go:168] "Request Body" body=""
	I1213 18:37:05.580826   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:05.581178   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:06.080721   38829 type.go:168] "Request Body" body=""
	I1213 18:37:06.080796   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:06.081180   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:06.580658   38829 type.go:168] "Request Body" body=""
	I1213 18:37:06.580727   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:06.581063   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:07.080796   38829 type.go:168] "Request Body" body=""
	I1213 18:37:07.080883   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:07.081219   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:07.081280   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:07.580756   38829 type.go:168] "Request Body" body=""
	I1213 18:37:07.580835   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:07.581166   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:08.080678   38829 type.go:168] "Request Body" body=""
	I1213 18:37:08.080757   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:08.081073   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:08.580840   38829 type.go:168] "Request Body" body=""
	I1213 18:37:08.580922   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:08.581286   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:08.646539   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:37:08.707161   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:37:08.707197   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:37:08.707216   38829 retry.go:31] will retry after 43.904706278s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:37:09.080730   38829 type.go:168] "Request Body" body=""
	I1213 18:37:09.080812   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:09.081148   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:09.580688   38829 type.go:168] "Request Body" body=""
	I1213 18:37:09.580756   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:09.581080   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:09.581129   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:10.080738   38829 type.go:168] "Request Body" body=""
	I1213 18:37:10.080818   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:10.081184   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:10.580752   38829 type.go:168] "Request Body" body=""
	I1213 18:37:10.580829   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:10.581212   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:11.080819   38829 type.go:168] "Request Body" body=""
	I1213 18:37:11.080905   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:11.081275   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:11.580750   38829 type.go:168] "Request Body" body=""
	I1213 18:37:11.580826   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:11.581167   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:11.581218   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:12.080976   38829 type.go:168] "Request Body" body=""
	I1213 18:37:12.081075   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:12.081413   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:12.581163   38829 type.go:168] "Request Body" body=""
	I1213 18:37:12.581239   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:12.581504   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:13.081350   38829 type.go:168] "Request Body" body=""
	I1213 18:37:13.081422   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:13.081759   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:13.581540   38829 type.go:168] "Request Body" body=""
	I1213 18:37:13.581621   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:13.581958   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:13.582012   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:14.080637   38829 type.go:168] "Request Body" body=""
	I1213 18:37:14.080749   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:14.081037   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:14.580751   38829 type.go:168] "Request Body" body=""
	I1213 18:37:14.580822   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:14.581126   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:15.080809   38829 type.go:168] "Request Body" body=""
	I1213 18:37:15.080894   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:15.081289   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:15.580701   38829 type.go:168] "Request Body" body=""
	I1213 18:37:15.580784   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:15.581161   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:16.080844   38829 type.go:168] "Request Body" body=""
	I1213 18:37:16.080922   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:16.081237   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:16.081285   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:16.580898   38829 type.go:168] "Request Body" body=""
	I1213 18:37:16.581034   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:16.581399   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:17.080661   38829 type.go:168] "Request Body" body=""
	I1213 18:37:17.080737   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:17.080990   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:17.580692   38829 type.go:168] "Request Body" body=""
	I1213 18:37:17.580803   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:17.581102   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:18.080750   38829 type.go:168] "Request Body" body=""
	I1213 18:37:18.080868   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:18.081221   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:18.581194   38829 type.go:168] "Request Body" body=""
	I1213 18:37:18.581282   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:18.581589   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:18.581661   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:19.080720   38829 type.go:168] "Request Body" body=""
	I1213 18:37:19.080794   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:19.081153   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:19.580707   38829 type.go:168] "Request Body" body=""
	I1213 18:37:19.580807   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:19.581139   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:20.080683   38829 type.go:168] "Request Body" body=""
	I1213 18:37:20.080783   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:20.081124   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:20.580699   38829 type.go:168] "Request Body" body=""
	I1213 18:37:20.580768   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:20.581140   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:21.080704   38829 type.go:168] "Request Body" body=""
	I1213 18:37:21.080813   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:21.081147   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:21.081200   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:21.580715   38829 type.go:168] "Request Body" body=""
	I1213 18:37:21.580794   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:21.581158   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:22.080770   38829 type.go:168] "Request Body" body=""
	I1213 18:37:22.080878   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:22.081249   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:22.580823   38829 type.go:168] "Request Body" body=""
	I1213 18:37:22.580919   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:22.581227   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:23.080672   38829 type.go:168] "Request Body" body=""
	I1213 18:37:23.080740   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:23.081069   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:23.580725   38829 type.go:168] "Request Body" body=""
	I1213 18:37:23.580816   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:23.581144   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:23.581194   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:24.081109   38829 type.go:168] "Request Body" body=""
	I1213 18:37:24.081180   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:24.081522   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:24.581618   38829 type.go:168] "Request Body" body=""
	I1213 18:37:24.581687   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:24.582010   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:25.080756   38829 type.go:168] "Request Body" body=""
	I1213 18:37:25.080839   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:25.081197   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:25.580943   38829 type.go:168] "Request Body" body=""
	I1213 18:37:25.581038   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:25.581354   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:25.581416   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:26.080723   38829 type.go:168] "Request Body" body=""
	I1213 18:37:26.080835   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:26.081227   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:26.580735   38829 type.go:168] "Request Body" body=""
	I1213 18:37:26.580817   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:26.581160   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:27.080700   38829 type.go:168] "Request Body" body=""
	I1213 18:37:27.080784   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:27.081126   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:27.580667   38829 type.go:168] "Request Body" body=""
	I1213 18:37:27.580751   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:27.581089   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:28.080604   38829 type.go:168] "Request Body" body=""
	I1213 18:37:28.080698   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:28.081045   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:28.081097   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:28.580817   38829 type.go:168] "Request Body" body=""
	I1213 18:37:28.580906   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:28.581222   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:29.080796   38829 type.go:168] "Request Body" body=""
	I1213 18:37:29.080873   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:29.081151   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:29.580777   38829 type.go:168] "Request Body" body=""
	I1213 18:37:29.580870   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:29.581199   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:30.080803   38829 type.go:168] "Request Body" body=""
	I1213 18:37:30.080884   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:30.081237   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:30.081287   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:30.580672   38829 type.go:168] "Request Body" body=""
	I1213 18:37:30.580745   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:30.581077   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:31.081506   38829 type.go:168] "Request Body" body=""
	I1213 18:37:31.081581   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:31.081922   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:31.580645   38829 type.go:168] "Request Body" body=""
	I1213 18:37:31.580718   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:31.581102   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:32.080661   38829 type.go:168] "Request Body" body=""
	I1213 18:37:32.080783   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:32.081114   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:32.580825   38829 type.go:168] "Request Body" body=""
	I1213 18:37:32.580936   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:32.581248   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:32.581295   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:33.080746   38829 type.go:168] "Request Body" body=""
	I1213 18:37:33.080835   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:33.081225   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:33.580676   38829 type.go:168] "Request Body" body=""
	I1213 18:37:33.580750   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:33.581029   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:34.081646   38829 type.go:168] "Request Body" body=""
	I1213 18:37:34.081715   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:34.082009   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:34.580682   38829 type.go:168] "Request Body" body=""
	I1213 18:37:34.580780   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:34.581134   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:35.080825   38829 type.go:168] "Request Body" body=""
	I1213 18:37:35.080895   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:35.081246   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:35.081298   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:35.580940   38829 type.go:168] "Request Body" body=""
	I1213 18:37:35.581051   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:35.581350   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:35.813701   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:37:35.887144   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:37:35.887179   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:37:35.887279   38829 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 18:37:36.080750   38829 type.go:168] "Request Body" body=""
	I1213 18:37:36.080833   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:36.081177   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:36.580678   38829 type.go:168] "Request Body" body=""
	I1213 18:37:36.580752   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:36.581058   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:37.080714   38829 type.go:168] "Request Body" body=""
	I1213 18:37:37.080814   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:37.081161   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:37.580851   38829 type.go:168] "Request Body" body=""
	I1213 18:37:37.580926   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:37.581239   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:37.581288   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:38.080774   38829 type.go:168] "Request Body" body=""
	I1213 18:37:38.080865   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:38.081305   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:38.581237   38829 type.go:168] "Request Body" body=""
	I1213 18:37:38.581321   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:38.581645   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:39.081533   38829 type.go:168] "Request Body" body=""
	I1213 18:37:39.081612   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:39.081897   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:39.581503   38829 type.go:168] "Request Body" body=""
	I1213 18:37:39.581567   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:39.581828   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:39.581866   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:40.081636   38829 type.go:168] "Request Body" body=""
	I1213 18:37:40.081710   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:40.082035   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:40.580686   38829 type.go:168] "Request Body" body=""
	I1213 18:37:40.580764   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:40.581082   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:41.080659   38829 type.go:168] "Request Body" body=""
	I1213 18:37:41.080744   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:41.081073   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:41.580856   38829 type.go:168] "Request Body" body=""
	I1213 18:37:41.580929   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:41.581268   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:42.080912   38829 type.go:168] "Request Body" body=""
	I1213 18:37:42.081054   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:42.081405   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:42.081473   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:42.581188   38829 type.go:168] "Request Body" body=""
	I1213 18:37:42.581268   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:42.581539   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:43.081397   38829 type.go:168] "Request Body" body=""
	I1213 18:37:43.081474   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:43.081823   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:43.581624   38829 type.go:168] "Request Body" body=""
	I1213 18:37:43.581704   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:43.582019   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:44.081168   38829 type.go:168] "Request Body" body=""
	I1213 18:37:44.081243   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:44.081539   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:44.081581   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:44.581405   38829 type.go:168] "Request Body" body=""
	I1213 18:37:44.581481   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:44.581805   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:45.081836   38829 type.go:168] "Request Body" body=""
	I1213 18:37:45.081938   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:45.082358   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:45.580699   38829 type.go:168] "Request Body" body=""
	I1213 18:37:45.580773   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:45.581090   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:46.080825   38829 type.go:168] "Request Body" body=""
	I1213 18:37:46.080898   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:46.081231   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:46.580728   38829 type.go:168] "Request Body" body=""
	I1213 18:37:46.580818   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:46.581180   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:46.581235   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:47.080684   38829 type.go:168] "Request Body" body=""
	I1213 18:37:47.080759   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:47.081124   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:47.580848   38829 type.go:168] "Request Body" body=""
	I1213 18:37:47.580921   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:47.581277   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:48.080712   38829 type.go:168] "Request Body" body=""
	I1213 18:37:48.080804   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:48.081135   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:48.580811   38829 type.go:168] "Request Body" body=""
	I1213 18:37:48.580882   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:48.581154   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:49.081058   38829 type.go:168] "Request Body" body=""
	I1213 18:37:49.081150   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:49.081477   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:49.081542   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:49.581293   38829 type.go:168] "Request Body" body=""
	I1213 18:37:49.581370   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:49.581713   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:50.081496   38829 type.go:168] "Request Body" body=""
	I1213 18:37:50.081562   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:50.081847   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:50.581629   38829 type.go:168] "Request Body" body=""
	I1213 18:37:50.581706   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:50.582071   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:51.080700   38829 type.go:168] "Request Body" body=""
	I1213 18:37:51.080790   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:51.081171   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:51.580683   38829 type.go:168] "Request Body" body=""
	I1213 18:37:51.580754   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:51.581047   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:51.581094   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:52.080714   38829 type.go:168] "Request Body" body=""
	I1213 18:37:52.080787   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:52.081175   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:52.580775   38829 type.go:168] "Request Body" body=""
	I1213 18:37:52.580867   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:52.581254   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:52.612466   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:37:52.672905   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:37:52.677070   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:37:52.677165   38829 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 18:37:52.680309   38829 out.go:179] * Enabled addons: 
	I1213 18:37:52.684021   38829 addons.go:530] duration metric: took 1m54.470472162s for enable addons: enabled=[]
	I1213 18:37:53.081534   38829 type.go:168] "Request Body" body=""
	I1213 18:37:53.081600   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:53.081904   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:53.580635   38829 type.go:168] "Request Body" body=""
	I1213 18:37:53.580711   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:53.581033   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:54.080643   38829 type.go:168] "Request Body" body=""
	I1213 18:37:54.080739   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:54.082029   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	W1213 18:37:54.082091   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:54.581623   38829 type.go:168] "Request Body" body=""
	I1213 18:37:54.581698   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:54.581957   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:55.080687   38829 type.go:168] "Request Body" body=""
	I1213 18:37:55.080780   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:55.081111   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:55.580756   38829 type.go:168] "Request Body" body=""
	I1213 18:37:55.580828   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:55.581197   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:56.080640   38829 type.go:168] "Request Body" body=""
	I1213 18:37:56.080714   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:56.081049   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:56.580613   38829 type.go:168] "Request Body" body=""
	I1213 18:37:56.580689   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:56.581045   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:56.581101   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:57.080597   38829 type.go:168] "Request Body" body=""
	I1213 18:37:57.080691   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:57.081049   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:57.580930   38829 type.go:168] "Request Body" body=""
	I1213 18:37:57.581038   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:57.585714   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1213 18:37:58.081512   38829 type.go:168] "Request Body" body=""
	I1213 18:37:58.081591   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:58.081945   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:58.580703   38829 type.go:168] "Request Body" body=""
	I1213 18:37:58.580778   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:58.581145   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:58.581214   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:59.081515   38829 type.go:168] "Request Body" body=""
	I1213 18:37:59.081606   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:59.081931   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:59.580661   38829 type.go:168] "Request Body" body=""
	I1213 18:37:59.580732   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:59.581072   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:00.080803   38829 type.go:168] "Request Body" body=""
	I1213 18:38:00.080888   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:00.081237   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:00.581619   38829 type.go:168] "Request Body" body=""
	I1213 18:38:00.581690   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:00.582027   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:00.582084   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:01.080751   38829 type.go:168] "Request Body" body=""
	I1213 18:38:01.080838   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:01.081194   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:01.580724   38829 type.go:168] "Request Body" body=""
	I1213 18:38:01.580804   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:01.581152   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:02.080668   38829 type.go:168] "Request Body" body=""
	I1213 18:38:02.080746   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:02.081102   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:02.580776   38829 type.go:168] "Request Body" body=""
	I1213 18:38:02.580850   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:02.581187   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:03.080936   38829 type.go:168] "Request Body" body=""
	I1213 18:38:03.081031   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:03.081349   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:03.081405   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET poll of https://192.168.49.2:8441/api/v1/nodes/functional-752103 repeats every ~500ms from 18:38:03.58 through 18:39:03.08, each attempt logging an empty "Response" status="" headers="" milliseconds=0 entry; the node_ready.go:55 warning (error getting node "functional-752103" condition "Ready" status (will retry): dial tcp 192.168.49.2:8441: connect: connection refused) recurs roughly every 2 to 2.5 seconds throughout this interval; the log resumes with the last poll of that minute below, and a short illustrative sketch of the poll-and-retry pattern follows that excerpt ...]
	I1213 18:39:03.580700   38829 type.go:168] "Request Body" body=""
	I1213 18:39:03.580773   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:03.581077   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:04.080632   38829 type.go:168] "Request Body" body=""
	I1213 18:39:04.080714   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:04.081077   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:04.580778   38829 type.go:168] "Request Body" body=""
	I1213 18:39:04.580863   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:04.581243   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:04.581303   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:05.080687   38829 type.go:168] "Request Body" body=""
	I1213 18:39:05.080765   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:05.081059   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:05.580796   38829 type.go:168] "Request Body" body=""
	I1213 18:39:05.580872   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:05.581215   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:06.080727   38829 type.go:168] "Request Body" body=""
	I1213 18:39:06.080803   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:06.081158   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:06.580837   38829 type.go:168] "Request Body" body=""
	I1213 18:39:06.580917   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:06.581202   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:07.080725   38829 type.go:168] "Request Body" body=""
	I1213 18:39:07.080808   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:07.081164   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:07.081214   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:07.580716   38829 type.go:168] "Request Body" body=""
	I1213 18:39:07.580794   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:07.581129   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:08.080858   38829 type.go:168] "Request Body" body=""
	I1213 18:39:08.080931   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:08.081213   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:08.581137   38829 type.go:168] "Request Body" body=""
	I1213 18:39:08.581207   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:08.581513   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:09.081065   38829 type.go:168] "Request Body" body=""
	I1213 18:39:09.081139   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:09.081514   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:09.081581   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:09.581276   38829 type.go:168] "Request Body" body=""
	I1213 18:39:09.581342   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:09.581644   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:10.081407   38829 type.go:168] "Request Body" body=""
	I1213 18:39:10.081483   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:10.081851   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:10.581496   38829 type.go:168] "Request Body" body=""
	I1213 18:39:10.581567   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:10.581887   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:11.080629   38829 type.go:168] "Request Body" body=""
	I1213 18:39:11.080701   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:11.081001   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:11.580726   38829 type.go:168] "Request Body" body=""
	I1213 18:39:11.580805   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:11.581121   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:11.581171   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:12.080760   38829 type.go:168] "Request Body" body=""
	I1213 18:39:12.080838   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:12.081152   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:12.580671   38829 type.go:168] "Request Body" body=""
	I1213 18:39:12.580744   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:12.581068   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:13.080734   38829 type.go:168] "Request Body" body=""
	I1213 18:39:13.080808   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:13.081124   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:13.580863   38829 type.go:168] "Request Body" body=""
	I1213 18:39:13.580937   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:13.581281   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:13.581332   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:14.081577   38829 type.go:168] "Request Body" body=""
	I1213 18:39:14.081653   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:14.081950   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:14.580638   38829 type.go:168] "Request Body" body=""
	I1213 18:39:14.580713   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:14.581046   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:15.080717   38829 type.go:168] "Request Body" body=""
	I1213 18:39:15.080825   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:15.081191   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:15.580864   38829 type.go:168] "Request Body" body=""
	I1213 18:39:15.580936   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:15.581210   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:16.080732   38829 type.go:168] "Request Body" body=""
	I1213 18:39:16.080807   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:16.081171   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:16.081237   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:16.580894   38829 type.go:168] "Request Body" body=""
	I1213 18:39:16.580969   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:16.581301   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:17.080988   38829 type.go:168] "Request Body" body=""
	I1213 18:39:17.081089   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:17.081420   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:17.580765   38829 type.go:168] "Request Body" body=""
	I1213 18:39:17.580844   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:17.581202   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:18.080887   38829 type.go:168] "Request Body" body=""
	I1213 18:39:18.080962   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:18.081285   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:18.081330   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:18.581099   38829 type.go:168] "Request Body" body=""
	I1213 18:39:18.581170   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:18.581423   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:19.081384   38829 type.go:168] "Request Body" body=""
	I1213 18:39:19.081453   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:19.081768   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:19.581414   38829 type.go:168] "Request Body" body=""
	I1213 18:39:19.581490   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:19.581786   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:20.081602   38829 type.go:168] "Request Body" body=""
	I1213 18:39:20.081678   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:20.081965   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:20.082018   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:20.580679   38829 type.go:168] "Request Body" body=""
	I1213 18:39:20.580788   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:20.581147   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:21.080703   38829 type.go:168] "Request Body" body=""
	I1213 18:39:21.080796   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:21.081146   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:21.580784   38829 type.go:168] "Request Body" body=""
	I1213 18:39:21.580863   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:21.581224   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:22.080782   38829 type.go:168] "Request Body" body=""
	I1213 18:39:22.080855   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:22.081300   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:22.580762   38829 type.go:168] "Request Body" body=""
	I1213 18:39:22.580835   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:22.581147   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:22.581194   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:23.080788   38829 type.go:168] "Request Body" body=""
	I1213 18:39:23.080860   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:23.081193   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:23.580732   38829 type.go:168] "Request Body" body=""
	I1213 18:39:23.580820   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:23.581147   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:24.081435   38829 type.go:168] "Request Body" body=""
	I1213 18:39:24.081530   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:24.081884   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:24.581587   38829 type.go:168] "Request Body" body=""
	I1213 18:39:24.581657   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:24.581912   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:24.581951   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:25.080657   38829 type.go:168] "Request Body" body=""
	I1213 18:39:25.080734   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:25.081179   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:25.580733   38829 type.go:168] "Request Body" body=""
	I1213 18:39:25.580821   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:25.581190   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:26.080869   38829 type.go:168] "Request Body" body=""
	I1213 18:39:26.080936   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:26.081224   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:26.580741   38829 type.go:168] "Request Body" body=""
	I1213 18:39:26.580814   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:26.581148   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:27.080703   38829 type.go:168] "Request Body" body=""
	I1213 18:39:27.080786   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:27.081111   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:27.081165   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:27.580724   38829 type.go:168] "Request Body" body=""
	I1213 18:39:27.580797   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:27.581139   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:28.080722   38829 type.go:168] "Request Body" body=""
	I1213 18:39:28.080793   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:28.081199   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:28.580834   38829 type.go:168] "Request Body" body=""
	I1213 18:39:28.580915   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:28.581280   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:29.081285   38829 type.go:168] "Request Body" body=""
	I1213 18:39:29.081351   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:29.081628   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:29.081672   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:29.581065   38829 type.go:168] "Request Body" body=""
	I1213 18:39:29.581140   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:29.581481   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:30.081344   38829 type.go:168] "Request Body" body=""
	I1213 18:39:30.081439   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:30.081896   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:30.580671   38829 type.go:168] "Request Body" body=""
	I1213 18:39:30.580748   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:30.581066   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:31.080743   38829 type.go:168] "Request Body" body=""
	I1213 18:39:31.080834   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:31.081162   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:31.580866   38829 type.go:168] "Request Body" body=""
	I1213 18:39:31.580942   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:31.581337   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:31.581394   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:32.080782   38829 type.go:168] "Request Body" body=""
	I1213 18:39:32.080853   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:32.081134   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:32.580755   38829 type.go:168] "Request Body" body=""
	I1213 18:39:32.580829   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:32.581200   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:33.080901   38829 type.go:168] "Request Body" body=""
	I1213 18:39:33.080972   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:33.081318   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:33.580802   38829 type.go:168] "Request Body" body=""
	I1213 18:39:33.580878   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:33.581182   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:34.080872   38829 type.go:168] "Request Body" body=""
	I1213 18:39:34.080943   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:34.081303   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:34.081358   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:34.580730   38829 type.go:168] "Request Body" body=""
	I1213 18:39:34.580804   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:34.581136   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:35.080815   38829 type.go:168] "Request Body" body=""
	I1213 18:39:35.080883   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:35.081173   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:35.580730   38829 type.go:168] "Request Body" body=""
	I1213 18:39:35.580802   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:35.581133   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:36.080735   38829 type.go:168] "Request Body" body=""
	I1213 18:39:36.080809   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:36.081172   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:36.580859   38829 type.go:168] "Request Body" body=""
	I1213 18:39:36.580941   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:36.581223   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:36.581264   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:37.080720   38829 type.go:168] "Request Body" body=""
	I1213 18:39:37.080813   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:37.081267   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:37.580761   38829 type.go:168] "Request Body" body=""
	I1213 18:39:37.580833   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:37.581165   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:38.080809   38829 type.go:168] "Request Body" body=""
	I1213 18:39:38.080881   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:38.081177   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:38.581160   38829 type.go:168] "Request Body" body=""
	I1213 18:39:38.581229   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:38.581546   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:38.581608   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:39.081316   38829 type.go:168] "Request Body" body=""
	I1213 18:39:39.081387   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:39.081699   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:39.581307   38829 type.go:168] "Request Body" body=""
	I1213 18:39:39.581382   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:39.581710   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:40.081503   38829 type.go:168] "Request Body" body=""
	I1213 18:39:40.081578   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:40.081882   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:40.581632   38829 type.go:168] "Request Body" body=""
	I1213 18:39:40.581730   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:40.582090   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:40.582139   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:41.080640   38829 type.go:168] "Request Body" body=""
	I1213 18:39:41.080710   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:41.081046   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:41.580670   38829 type.go:168] "Request Body" body=""
	I1213 18:39:41.580748   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:41.581076   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:42.080797   38829 type.go:168] "Request Body" body=""
	I1213 18:39:42.080878   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:42.081282   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:42.580711   38829 type.go:168] "Request Body" body=""
	I1213 18:39:42.580802   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:42.581132   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:43.080747   38829 type.go:168] "Request Body" body=""
	I1213 18:39:43.080819   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:43.081217   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:43.081283   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:43.580965   38829 type.go:168] "Request Body" body=""
	I1213 18:39:43.581057   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:43.581416   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:44.081437   38829 type.go:168] "Request Body" body=""
	I1213 18:39:44.081507   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:44.081776   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:44.581633   38829 type.go:168] "Request Body" body=""
	I1213 18:39:44.581707   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:44.582020   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:45.080770   38829 type.go:168] "Request Body" body=""
	I1213 18:39:45.080891   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:45.081375   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:45.081434   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:45.581089   38829 type.go:168] "Request Body" body=""
	I1213 18:39:45.581158   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:45.581469   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:46.080755   38829 type.go:168] "Request Body" body=""
	I1213 18:39:46.080828   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:46.081201   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:46.580794   38829 type.go:168] "Request Body" body=""
	I1213 18:39:46.580865   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:46.581173   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:47.080689   38829 type.go:168] "Request Body" body=""
	I1213 18:39:47.080768   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:47.081094   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:47.580669   38829 type.go:168] "Request Body" body=""
	I1213 18:39:47.580763   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:47.581109   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:47.581164   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:48.080848   38829 type.go:168] "Request Body" body=""
	I1213 18:39:48.080924   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:48.081228   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:48.581237   38829 type.go:168] "Request Body" body=""
	I1213 18:39:48.581311   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:48.581637   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:49.081081   38829 type.go:168] "Request Body" body=""
	I1213 18:39:49.081164   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:49.081471   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:49.581258   38829 type.go:168] "Request Body" body=""
	I1213 18:39:49.581336   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:49.581617   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:49.581664   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:50.081346   38829 type.go:168] "Request Body" body=""
	I1213 18:39:50.081416   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:50.081693   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:50.581552   38829 type.go:168] "Request Body" body=""
	I1213 18:39:50.581621   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:50.581942   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:51.080672   38829 type.go:168] "Request Body" body=""
	I1213 18:39:51.080806   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:51.081235   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:51.580885   38829 type.go:168] "Request Body" body=""
	I1213 18:39:51.580958   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:51.581315   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:52.080737   38829 type.go:168] "Request Body" body=""
	I1213 18:39:52.080811   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:52.081193   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:52.081249   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:52.580704   38829 type.go:168] "Request Body" body=""
	I1213 18:39:52.580784   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:52.581172   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:53.080692   38829 type.go:168] "Request Body" body=""
	I1213 18:39:53.080761   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:53.081060   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:53.580744   38829 type.go:168] "Request Body" body=""
	I1213 18:39:53.580823   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:53.581232   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:54.081089   38829 type.go:168] "Request Body" body=""
	I1213 18:39:54.081164   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:54.081658   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:54.081712   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:54.581346   38829 type.go:168] "Request Body" body=""
	I1213 18:39:54.581418   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:54.581673   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:55.081499   38829 type.go:168] "Request Body" body=""
	I1213 18:39:55.081596   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:55.081941   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:55.580685   38829 type.go:168] "Request Body" body=""
	I1213 18:39:55.580777   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:55.581180   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:56.080674   38829 type.go:168] "Request Body" body=""
	I1213 18:39:56.080750   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:56.081047   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:56.580707   38829 type.go:168] "Request Body" body=""
	I1213 18:39:56.580778   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:56.581204   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:56.581262   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:57.080917   38829 type.go:168] "Request Body" body=""
	I1213 18:39:57.081002   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:57.081366   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:57.580664   38829 type.go:168] "Request Body" body=""
	I1213 18:39:57.580745   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:57.581033   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:58.081028   38829 type.go:168] "Request Body" body=""
	I1213 18:39:58.081122   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:58.081478   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:58.581557   38829 type.go:168] "Request Body" body=""
	I1213 18:39:58.581639   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:58.582001   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:58.582075   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:59.081358   38829 type.go:168] "Request Body" body=""
	I1213 18:39:59.081453   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:59.081774   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:59.581595   38829 type.go:168] "Request Body" body=""
	I1213 18:39:59.581667   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:59.581967   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:00.080718   38829 type.go:168] "Request Body" body=""
	I1213 18:40:00.080803   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:00.081201   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:00.582760   38829 type.go:168] "Request Body" body=""
	I1213 18:40:00.582857   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:00.583187   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:00.583244   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:01.080684   38829 type.go:168] "Request Body" body=""
	I1213 18:40:01.080755   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:01.081087   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:01.580820   38829 type.go:168] "Request Body" body=""
	I1213 18:40:01.580895   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:01.581240   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:02.080921   38829 type.go:168] "Request Body" body=""
	I1213 18:40:02.080993   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:02.081270   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:02.580731   38829 type.go:168] "Request Body" body=""
	I1213 18:40:02.580806   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:02.581172   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:03.080880   38829 type.go:168] "Request Body" body=""
	I1213 18:40:03.080955   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:03.081306   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:03.081361   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:03.580996   38829 type.go:168] "Request Body" body=""
	I1213 18:40:03.581076   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:03.581335   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:04.080737   38829 type.go:168] "Request Body" body=""
	I1213 18:40:04.080818   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:04.081183   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:04.580737   38829 type.go:168] "Request Body" body=""
	I1213 18:40:04.580808   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:04.581149   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:05.080850   38829 type.go:168] "Request Body" body=""
	I1213 18:40:05.080927   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:05.081263   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:05.580963   38829 type.go:168] "Request Body" body=""
	I1213 18:40:05.581056   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:05.581401   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:05.581460   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:06.081245   38829 type.go:168] "Request Body" body=""
	I1213 18:40:06.081316   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:06.081669   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:06.581426   38829 type.go:168] "Request Body" body=""
	I1213 18:40:06.581509   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:06.581848   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:07.081645   38829 type.go:168] "Request Body" body=""
	I1213 18:40:07.081722   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:07.082062   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:07.580728   38829 type.go:168] "Request Body" body=""
	I1213 18:40:07.580813   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:07.581162   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:08.080728   38829 type.go:168] "Request Body" body=""
	I1213 18:40:08.080798   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:08.081088   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:08.081131   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:08.580917   38829 type.go:168] "Request Body" body=""
	I1213 18:40:08.580997   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:08.581369   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:09.081067   38829 type.go:168] "Request Body" body=""
	I1213 18:40:09.081141   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:09.081470   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:09.581192   38829 type.go:168] "Request Body" body=""
	I1213 18:40:09.581258   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:09.581523   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:10.081376   38829 type.go:168] "Request Body" body=""
	I1213 18:40:10.081454   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:10.081809   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:10.081865   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:10.581615   38829 type.go:168] "Request Body" body=""
	I1213 18:40:10.581696   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:10.582036   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:11.080690   38829 type.go:168] "Request Body" body=""
	I1213 18:40:11.080762   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:11.081125   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:11.580814   38829 type.go:168] "Request Body" body=""
	I1213 18:40:11.580891   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:11.581233   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:12.080745   38829 type.go:168] "Request Body" body=""
	I1213 18:40:12.080820   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:12.081174   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:12.580730   38829 type.go:168] "Request Body" body=""
	I1213 18:40:12.580802   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:12.581118   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:12.581177   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:13.080870   38829 type.go:168] "Request Body" body=""
	I1213 18:40:13.080953   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:13.081298   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:13.580990   38829 type.go:168] "Request Body" body=""
	I1213 18:40:13.581130   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:13.581452   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:14.081563   38829 type.go:168] "Request Body" body=""
	I1213 18:40:14.081631   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:14.081949   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:14.580642   38829 type.go:168] "Request Body" body=""
	I1213 18:40:14.580724   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:14.581092   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:15.080672   38829 type.go:168] "Request Body" body=""
	I1213 18:40:15.080749   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:15.081138   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:15.081197   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:15.580905   38829 type.go:168] "Request Body" body=""
	I1213 18:40:15.580977   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:15.581270   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:16.080728   38829 type.go:168] "Request Body" body=""
	I1213 18:40:16.080801   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:16.081170   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:16.580745   38829 type.go:168] "Request Body" body=""
	I1213 18:40:16.580823   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:16.581182   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:17.080854   38829 type.go:168] "Request Body" body=""
	I1213 18:40:17.080925   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:17.081196   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:17.081236   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:17.580885   38829 type.go:168] "Request Body" body=""
	I1213 18:40:17.580960   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:17.581311   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:18.081048   38829 type.go:168] "Request Body" body=""
	I1213 18:40:18.081128   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:18.081456   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:18.581421   38829 type.go:168] "Request Body" body=""
	I1213 18:40:18.581495   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:18.581752   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:19.081269   38829 type.go:168] "Request Body" body=""
	I1213 18:40:19.081345   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:19.081667   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:19.081723   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:19.581465   38829 type.go:168] "Request Body" body=""
	I1213 18:40:19.581546   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:19.581834   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:20.081620   38829 type.go:168] "Request Body" body=""
	I1213 18:40:20.081707   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:20.082023   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:20.580732   38829 type.go:168] "Request Body" body=""
	I1213 18:40:20.580806   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:20.581185   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:21.080748   38829 type.go:168] "Request Body" body=""
	I1213 18:40:21.080828   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:21.081195   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:21.580880   38829 type.go:168] "Request Body" body=""
	I1213 18:40:21.580954   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:21.581229   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:21.581273   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:22.080726   38829 type.go:168] "Request Body" body=""
	I1213 18:40:22.080802   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:22.081186   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:22.580892   38829 type.go:168] "Request Body" body=""
	I1213 18:40:22.580971   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:22.581314   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:23.080852   38829 type.go:168] "Request Body" body=""
	I1213 18:40:23.080921   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:23.081254   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:23.580738   38829 type.go:168] "Request Body" body=""
	I1213 18:40:23.580816   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:23.581213   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:24.080992   38829 type.go:168] "Request Body" body=""
	I1213 18:40:24.081086   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:24.081439   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:24.081493   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:24.581181   38829 type.go:168] "Request Body" body=""
	I1213 18:40:24.581254   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:24.581518   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:25.081519   38829 type.go:168] "Request Body" body=""
	I1213 18:40:25.081638   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:25.082066   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:25.580956   38829 type.go:168] "Request Body" body=""
	I1213 18:40:25.581049   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:25.581403   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:26.081103   38829 type.go:168] "Request Body" body=""
	I1213 18:40:26.081188   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:26.081496   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:26.081544   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:26.581271   38829 type.go:168] "Request Body" body=""
	I1213 18:40:26.581346   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:26.581679   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:27.081463   38829 type.go:168] "Request Body" body=""
	I1213 18:40:27.081544   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:27.081845   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:27.581582   38829 type.go:168] "Request Body" body=""
	I1213 18:40:27.581657   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:27.581970   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:28.080670   38829 type.go:168] "Request Body" body=""
	I1213 18:40:28.080746   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:28.081095   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:28.580759   38829 type.go:168] "Request Body" body=""
	I1213 18:40:28.580833   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:28.581189   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:28.581244   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:29.080966   38829 type.go:168] "Request Body" body=""
	I1213 18:40:29.081057   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:29.081325   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:29.580737   38829 type.go:168] "Request Body" body=""
	I1213 18:40:29.580809   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:29.581235   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:30.080981   38829 type.go:168] "Request Body" body=""
	I1213 18:40:30.081106   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:30.081499   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:30.581288   38829 type.go:168] "Request Body" body=""
	I1213 18:40:30.581365   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:30.581686   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:30.581744   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:31.081563   38829 type.go:168] "Request Body" body=""
	I1213 18:40:31.081643   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:31.081985   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:31.580733   38829 type.go:168] "Request Body" body=""
	I1213 18:40:31.580813   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:31.581128   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:32.080686   38829 type.go:168] "Request Body" body=""
	I1213 18:40:32.080759   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:32.081089   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:32.580719   38829 type.go:168] "Request Body" body=""
	I1213 18:40:32.580795   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:32.581153   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:33.080697   38829 type.go:168] "Request Body" body=""
	I1213 18:40:33.080771   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:33.081078   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:33.081125   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:33.580695   38829 type.go:168] "Request Body" body=""
	I1213 18:40:33.580776   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:33.581082   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:34.080711   38829 type.go:168] "Request Body" body=""
	I1213 18:40:34.080785   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:34.081116   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:34.580731   38829 type.go:168] "Request Body" body=""
	I1213 18:40:34.580810   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:34.581135   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:35.080858   38829 type.go:168] "Request Body" body=""
	I1213 18:40:35.080940   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:35.081258   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:35.081316   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:35.580736   38829 type.go:168] "Request Body" body=""
	I1213 18:40:35.580819   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:35.581180   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:36.080905   38829 type.go:168] "Request Body" body=""
	I1213 18:40:36.080982   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:36.081405   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:36.580715   38829 type.go:168] "Request Body" body=""
	I1213 18:40:36.580780   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:36.581071   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:37.080758   38829 type.go:168] "Request Body" body=""
	I1213 18:40:37.080841   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:37.081177   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:37.580742   38829 type.go:168] "Request Body" body=""
	I1213 18:40:37.580822   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:37.581185   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:37.581240   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:38.080845   38829 type.go:168] "Request Body" body=""
	I1213 18:40:38.080924   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:38.081284   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:38.580992   38829 type.go:168] "Request Body" body=""
	I1213 18:40:38.581079   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:38.581427   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:39.081037   38829 type.go:168] "Request Body" body=""
	I1213 18:40:39.081109   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:39.081425   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:39.580691   38829 type.go:168] "Request Body" body=""
	I1213 18:40:39.580779   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:39.581096   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:40.080864   38829 type.go:168] "Request Body" body=""
	I1213 18:40:40.080952   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:40.081316   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:40.081370   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:40.581072   38829 type.go:168] "Request Body" body=""
	I1213 18:40:40.581147   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:40.581455   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:41.080649   38829 type.go:168] "Request Body" body=""
	I1213 18:40:41.080720   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:41.080968   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:41.580717   38829 type.go:168] "Request Body" body=""
	I1213 18:40:41.580821   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:41.581143   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:42.080793   38829 type.go:168] "Request Body" body=""
	I1213 18:40:42.080889   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:42.081224   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:42.580774   38829 type.go:168] "Request Body" body=""
	I1213 18:40:42.580846   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:42.581129   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:42.581171   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:43.080817   38829 type.go:168] "Request Body" body=""
	I1213 18:40:43.080889   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:43.081182   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:43.580912   38829 type.go:168] "Request Body" body=""
	I1213 18:40:43.581022   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:43.581350   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:44.081100   38829 type.go:168] "Request Body" body=""
	I1213 18:40:44.081184   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:44.081466   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:44.581295   38829 type.go:168] "Request Body" body=""
	I1213 18:40:44.581368   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:44.581680   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:44.581735   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:45.081574   38829 type.go:168] "Request Body" body=""
	I1213 18:40:45.081671   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:45.082057   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:45.580753   38829 type.go:168] "Request Body" body=""
	I1213 18:40:45.580826   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:45.581123   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:46.080724   38829 type.go:168] "Request Body" body=""
	I1213 18:40:46.080807   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:46.081173   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:46.580875   38829 type.go:168] "Request Body" body=""
	I1213 18:40:46.580954   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:46.581347   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:47.080772   38829 type.go:168] "Request Body" body=""
	I1213 18:40:47.080843   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:47.081169   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:47.081222   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:47.580721   38829 type.go:168] "Request Body" body=""
	I1213 18:40:47.580803   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:47.581145   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:48.080733   38829 type.go:168] "Request Body" body=""
	I1213 18:40:48.080812   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:48.081180   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:48.581574   38829 type.go:168] "Request Body" body=""
	I1213 18:40:48.581646   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:48.581923   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:49.080895   38829 type.go:168] "Request Body" body=""
	I1213 18:40:49.080969   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:49.081284   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:49.081332   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:49.580737   38829 type.go:168] "Request Body" body=""
	I1213 18:40:49.580813   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:49.581189   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:50.080877   38829 type.go:168] "Request Body" body=""
	I1213 18:40:50.080951   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:50.081313   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:50.580740   38829 type.go:168] "Request Body" body=""
	I1213 18:40:50.580817   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:50.581173   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:51.080726   38829 type.go:168] "Request Body" body=""
	I1213 18:40:51.080811   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:51.081140   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:51.580661   38829 type.go:168] "Request Body" body=""
	I1213 18:40:51.580735   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:51.581094   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:51.581147   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:52.080738   38829 type.go:168] "Request Body" body=""
	I1213 18:40:52.080814   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:52.081156   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:52.580707   38829 type.go:168] "Request Body" body=""
	I1213 18:40:52.580781   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:52.581124   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:53.080661   38829 type.go:168] "Request Body" body=""
	I1213 18:40:53.080737   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:53.081101   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:53.580661   38829 type.go:168] "Request Body" body=""
	I1213 18:40:53.580737   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:53.581073   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:54.081075   38829 type.go:168] "Request Body" body=""
	I1213 18:40:54.081153   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:54.081490   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:54.081544   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:54.580688   38829 type.go:168] "Request Body" body=""
	I1213 18:40:54.580770   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:54.581090   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:55.080755   38829 type.go:168] "Request Body" body=""
	I1213 18:40:55.080845   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:55.081218   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:55.580731   38829 type.go:168] "Request Body" body=""
	I1213 18:40:55.580806   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:55.581128   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:56.080828   38829 type.go:168] "Request Body" body=""
	I1213 18:40:56.080907   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:56.081254   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:56.580945   38829 type.go:168] "Request Body" body=""
	I1213 18:40:56.581061   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:56.581383   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:56.581438   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:57.081145   38829 type.go:168] "Request Body" body=""
	I1213 18:40:57.081219   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:57.081499   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:57.581369   38829 type.go:168] "Request Body" body=""
	I1213 18:40:57.581461   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:57.581753   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:58.081564   38829 type.go:168] "Request Body" body=""
	I1213 18:40:58.081635   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:58.081964   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:58.580734   38829 type.go:168] "Request Body" body=""
	I1213 18:40:58.580811   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:58.581151   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:59.081182   38829 type.go:168] "Request Body" body=""
	I1213 18:40:59.081258   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:59.081514   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:59.081555   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:59.581349   38829 type.go:168] "Request Body" body=""
	I1213 18:40:59.581423   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:59.581720   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:00.081815   38829 type.go:168] "Request Body" body=""
	I1213 18:41:00.081903   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:00.082221   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:00.581646   38829 type.go:168] "Request Body" body=""
	I1213 18:41:00.581716   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:00.582021   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:01.080712   38829 type.go:168] "Request Body" body=""
	I1213 18:41:01.080792   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:01.081087   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:01.580731   38829 type.go:168] "Request Body" body=""
	I1213 18:41:01.580810   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:01.581320   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:01.581376   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:02.080810   38829 type.go:168] "Request Body" body=""
	I1213 18:41:02.080888   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:02.081180   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:02.580849   38829 type.go:168] "Request Body" body=""
	I1213 18:41:02.580920   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:02.581274   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:03.080853   38829 type.go:168] "Request Body" body=""
	I1213 18:41:03.080929   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:03.081297   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:03.580687   38829 type.go:168] "Request Body" body=""
	I1213 18:41:03.580761   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:03.581113   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:04.080818   38829 type.go:168] "Request Body" body=""
	I1213 18:41:04.080891   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:04.081231   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:04.081279   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:04.580784   38829 type.go:168] "Request Body" body=""
	I1213 18:41:04.580861   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:04.581254   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:05.080702   38829 type.go:168] "Request Body" body=""
	I1213 18:41:05.080774   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:05.081067   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:05.580726   38829 type.go:168] "Request Body" body=""
	I1213 18:41:05.580823   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:05.581149   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:06.080754   38829 type.go:168] "Request Body" body=""
	I1213 18:41:06.080824   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:06.081183   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:06.580809   38829 type.go:168] "Request Body" body=""
	I1213 18:41:06.580876   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:06.581193   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:06.581275   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:07.080748   38829 type.go:168] "Request Body" body=""
	I1213 18:41:07.080818   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:07.081155   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:07.580864   38829 type.go:168] "Request Body" body=""
	I1213 18:41:07.580935   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:07.581293   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:08.080815   38829 type.go:168] "Request Body" body=""
	I1213 18:41:08.080882   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:08.081228   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:08.581184   38829 type.go:168] "Request Body" body=""
	I1213 18:41:08.581267   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:08.581600   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:08.581650   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:09.081329   38829 type.go:168] "Request Body" body=""
	I1213 18:41:09.081400   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:09.081701   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:09.581386   38829 type.go:168] "Request Body" body=""
	I1213 18:41:09.581459   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:09.581736   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:10.081624   38829 type.go:168] "Request Body" body=""
	I1213 18:41:10.081709   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:10.082054   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:10.580758   38829 type.go:168] "Request Body" body=""
	I1213 18:41:10.580829   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:10.581165   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:11.080690   38829 type.go:168] "Request Body" body=""
	I1213 18:41:11.080767   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:11.081130   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:11.081225   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:11.580737   38829 type.go:168] "Request Body" body=""
	I1213 18:41:11.580838   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:11.581297   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:12.080983   38829 type.go:168] "Request Body" body=""
	I1213 18:41:12.081129   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:12.081449   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:12.581247   38829 type.go:168] "Request Body" body=""
	I1213 18:41:12.581315   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:12.581576   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:13.080944   38829 type.go:168] "Request Body" body=""
	I1213 18:41:13.081031   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:13.081378   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:13.081435   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:13.580973   38829 type.go:168] "Request Body" body=""
	I1213 18:41:13.581116   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:13.581497   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:14.081648   38829 type.go:168] "Request Body" body=""
	I1213 18:41:14.081731   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:14.082000   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:14.580709   38829 type.go:168] "Request Body" body=""
	I1213 18:41:14.580805   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:14.581161   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:15.080870   38829 type.go:168] "Request Body" body=""
	I1213 18:41:15.080947   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:15.081336   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:15.580661   38829 type.go:168] "Request Body" body=""
	I1213 18:41:15.580729   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:15.581047   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:15.581086   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:16.080721   38829 type.go:168] "Request Body" body=""
	I1213 18:41:16.080833   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:16.081148   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:16.580760   38829 type.go:168] "Request Body" body=""
	I1213 18:41:16.580840   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:16.581166   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:17.080685   38829 type.go:168] "Request Body" body=""
	I1213 18:41:17.080772   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:17.081106   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:17.580714   38829 type.go:168] "Request Body" body=""
	I1213 18:41:17.580795   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:17.581116   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:17.581162   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:18.080745   38829 type.go:168] "Request Body" body=""
	I1213 18:41:18.080820   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:18.081200   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:18.581224   38829 type.go:168] "Request Body" body=""
	I1213 18:41:18.581296   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:18.581580   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:19.081352   38829 type.go:168] "Request Body" body=""
	I1213 18:41:19.081427   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:19.081734   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:19.581454   38829 type.go:168] "Request Body" body=""
	I1213 18:41:19.581571   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:19.581908   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:19.581960   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:20.081575   38829 type.go:168] "Request Body" body=""
	I1213 18:41:20.081653   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:20.081930   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:20.580639   38829 type.go:168] "Request Body" body=""
	I1213 18:41:20.580722   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:20.581082   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:21.080807   38829 type.go:168] "Request Body" body=""
	I1213 18:41:21.080885   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:21.081222   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:21.580675   38829 type.go:168] "Request Body" body=""
	I1213 18:41:21.580755   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:21.581125   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:22.080711   38829 type.go:168] "Request Body" body=""
	I1213 18:41:22.080789   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:22.081124   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:22.081174   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:22.580748   38829 type.go:168] "Request Body" body=""
	I1213 18:41:22.580823   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:22.581169   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:23.080686   38829 type.go:168] "Request Body" body=""
	I1213 18:41:23.080758   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:23.081067   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:23.580652   38829 type.go:168] "Request Body" body=""
	I1213 18:41:23.580733   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:23.581072   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:24.081615   38829 type.go:168] "Request Body" body=""
	I1213 18:41:24.081701   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:24.082028   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:24.082086   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:24.580715   38829 type.go:168] "Request Body" body=""
	I1213 18:41:24.580790   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:24.581145   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:25.080723   38829 type.go:168] "Request Body" body=""
	I1213 18:41:25.080800   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:25.081135   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:25.580732   38829 type.go:168] "Request Body" body=""
	I1213 18:41:25.580804   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:25.581183   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:26.080778   38829 type.go:168] "Request Body" body=""
	I1213 18:41:26.080846   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:26.081178   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:26.580887   38829 type.go:168] "Request Body" body=""
	I1213 18:41:26.580963   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:26.581315   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:26.581370   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:27.080706   38829 type.go:168] "Request Body" body=""
	I1213 18:41:27.080786   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:27.081128   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:27.580668   38829 type.go:168] "Request Body" body=""
	I1213 18:41:27.580741   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:27.581056   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:28.080772   38829 type.go:168] "Request Body" body=""
	I1213 18:41:28.080845   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:28.081201   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:28.580902   38829 type.go:168] "Request Body" body=""
	I1213 18:41:28.580974   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:28.581301   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:29.080749   38829 type.go:168] "Request Body" body=""
	I1213 18:41:29.080817   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:29.081091   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:29.081132   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:29.580839   38829 type.go:168] "Request Body" body=""
	I1213 18:41:29.580981   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:29.581329   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:30.080766   38829 type.go:168] "Request Body" body=""
	I1213 18:41:30.080851   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:30.081270   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:30.580990   38829 type.go:168] "Request Body" body=""
	I1213 18:41:30.581076   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:30.581343   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:31.080711   38829 type.go:168] "Request Body" body=""
	I1213 18:41:31.080787   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:31.081149   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:31.081200   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:31.580852   38829 type.go:168] "Request Body" body=""
	I1213 18:41:31.580935   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:31.581309   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:32.080976   38829 type.go:168] "Request Body" body=""
	I1213 18:41:32.081071   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:32.081376   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:32.580730   38829 type.go:168] "Request Body" body=""
	I1213 18:41:32.580812   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:32.581179   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:33.080899   38829 type.go:168] "Request Body" body=""
	I1213 18:41:33.080979   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:33.081353   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:33.081413   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:33.580694   38829 type.go:168] "Request Body" body=""
	I1213 18:41:33.580774   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:33.581069   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:34.081613   38829 type.go:168] "Request Body" body=""
	I1213 18:41:34.081689   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:34.082033   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:34.580727   38829 type.go:168] "Request Body" body=""
	I1213 18:41:34.580828   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:34.581146   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:35.080790   38829 type.go:168] "Request Body" body=""
	I1213 18:41:35.080863   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:35.081157   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:35.580696   38829 type.go:168] "Request Body" body=""
	I1213 18:41:35.580790   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:35.581078   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:35.581121   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:36.080756   38829 type.go:168] "Request Body" body=""
	I1213 18:41:36.080851   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:36.081282   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:36.580668   38829 type.go:168] "Request Body" body=""
	I1213 18:41:36.580739   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:36.581032   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:37.080757   38829 type.go:168] "Request Body" body=""
	I1213 18:41:37.080851   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:37.081179   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:37.580859   38829 type.go:168] "Request Body" body=""
	I1213 18:41:37.580931   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:37.581253   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:37.581299   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:38.080940   38829 type.go:168] "Request Body" body=""
	I1213 18:41:38.081033   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:38.081302   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:38.581248   38829 type.go:168] "Request Body" body=""
	I1213 18:41:38.581332   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:38.581671   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:39.081578   38829 type.go:168] "Request Body" body=""
	I1213 18:41:39.081659   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:39.081987   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:39.580653   38829 type.go:168] "Request Body" body=""
	I1213 18:41:39.580729   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:39.581076   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:40.080757   38829 type.go:168] "Request Body" body=""
	I1213 18:41:40.080841   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:40.081195   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:40.081257   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:40.580739   38829 type.go:168] "Request Body" body=""
	I1213 18:41:40.580813   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:40.581120   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:41.080675   38829 type.go:168] "Request Body" body=""
	I1213 18:41:41.080749   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:41.081085   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:41.580789   38829 type.go:168] "Request Body" body=""
	I1213 18:41:41.580862   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:41.581170   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:42.080802   38829 type.go:168] "Request Body" body=""
	I1213 18:41:42.080877   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:42.081216   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:42.580919   38829 type.go:168] "Request Body" body=""
	I1213 18:41:42.580994   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:42.581286   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:42.581339   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:43.080761   38829 type.go:168] "Request Body" body=""
	I1213 18:41:43.080833   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:43.081217   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:43.580933   38829 type.go:168] "Request Body" body=""
	I1213 18:41:43.581025   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:43.581344   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:44.081112   38829 type.go:168] "Request Body" body=""
	I1213 18:41:44.081178   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:44.081445   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:44.581279   38829 type.go:168] "Request Body" body=""
	I1213 18:41:44.581350   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:44.581653   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:44.581708   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:45.081520   38829 type.go:168] "Request Body" body=""
	I1213 18:41:45.081600   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:45.081937   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:45.580652   38829 type.go:168] "Request Body" body=""
	I1213 18:41:45.580731   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:45.581051   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:46.080751   38829 type.go:168] "Request Body" body=""
	I1213 18:41:46.080838   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:46.081265   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:46.580968   38829 type.go:168] "Request Body" body=""
	I1213 18:41:46.581065   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:46.581388   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:47.080619   38829 type.go:168] "Request Body" body=""
	I1213 18:41:47.080685   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:47.080942   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:47.080980   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:47.580668   38829 type.go:168] "Request Body" body=""
	I1213 18:41:47.580743   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:47.581077   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:48.080761   38829 type.go:168] "Request Body" body=""
	I1213 18:41:48.080842   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:48.081166   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:48.581104   38829 type.go:168] "Request Body" body=""
	I1213 18:41:48.581172   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:48.581434   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:49.081502   38829 type.go:168] "Request Body" body=""
	I1213 18:41:49.081574   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:49.081903   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:49.081968   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:49.580639   38829 type.go:168] "Request Body" body=""
	I1213 18:41:49.580722   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:49.581089   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:50.080709   38829 type.go:168] "Request Body" body=""
	I1213 18:41:50.080785   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:50.081111   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:50.580720   38829 type.go:168] "Request Body" body=""
	I1213 18:41:50.580802   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:50.581143   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:51.080888   38829 type.go:168] "Request Body" body=""
	I1213 18:41:51.080963   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:51.081279   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:51.580674   38829 type.go:168] "Request Body" body=""
	I1213 18:41:51.580740   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:51.581077   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:51.581128   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:52.080773   38829 type.go:168] "Request Body" body=""
	I1213 18:41:52.080894   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:52.081249   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:52.580793   38829 type.go:168] "Request Body" body=""
	I1213 18:41:52.580867   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:52.581218   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:53.080706   38829 type.go:168] "Request Body" body=""
	I1213 18:41:53.080781   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:53.081080   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:53.580683   38829 type.go:168] "Request Body" body=""
	I1213 18:41:53.580763   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:53.581106   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:53.581159   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:54.080735   38829 type.go:168] "Request Body" body=""
	I1213 18:41:54.080815   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:54.081170   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:54.580662   38829 type.go:168] "Request Body" body=""
	I1213 18:41:54.580733   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:54.581088   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:55.080714   38829 type.go:168] "Request Body" body=""
	I1213 18:41:55.080791   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:55.081154   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:55.580764   38829 type.go:168] "Request Body" body=""
	I1213 18:41:55.580837   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:55.581137   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:55.581182   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:56.080717   38829 type.go:168] "Request Body" body=""
	I1213 18:41:56.080790   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:56.081130   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:56.580729   38829 type.go:168] "Request Body" body=""
	I1213 18:41:56.580826   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:56.581140   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:57.080852   38829 type.go:168] "Request Body" body=""
	I1213 18:41:57.080924   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:57.081256   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:57.580921   38829 type.go:168] "Request Body" body=""
	I1213 18:41:57.581000   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:57.581269   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:57.581307   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:58.080750   38829 type.go:168] "Request Body" body=""
	I1213 18:41:58.080828   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:58.081201   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:58.580714   38829 type.go:168] "Request Body" body=""
	I1213 18:41:58.580799   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:58.581146   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:59.081521   38829 type.go:168] "Request Body" body=""
	I1213 18:41:59.081580   38829 node_ready.go:38] duration metric: took 6m0.001077775s for node "functional-752103" to be "Ready" ...
	I1213 18:41:59.084666   38829 out.go:203] 
	W1213 18:41:59.087601   38829 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1213 18:41:59.087625   38829 out.go:285] * 
	W1213 18:41:59.089766   38829 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 18:41:59.092666   38829 out.go:203] 
	
	
	==> CRI-O <==
	Dec 13 18:35:56 functional-752103 crio[5390]: time="2025-12-13T18:35:56.945290742Z" level=info msg="Using the internal default seccomp profile"
	Dec 13 18:35:56 functional-752103 crio[5390]: time="2025-12-13T18:35:56.945420457Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Dec 13 18:35:56 functional-752103 crio[5390]: time="2025-12-13T18:35:56.945474053Z" level=info msg="No blockio config file specified, blockio not configured"
	Dec 13 18:35:56 functional-752103 crio[5390]: time="2025-12-13T18:35:56.945524113Z" level=info msg="RDT not available in the host system"
	Dec 13 18:35:56 functional-752103 crio[5390]: time="2025-12-13T18:35:56.945586898Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 13 18:35:56 functional-752103 crio[5390]: time="2025-12-13T18:35:56.946503864Z" level=info msg="Conmon does support the --sync option"
	Dec 13 18:35:56 functional-752103 crio[5390]: time="2025-12-13T18:35:56.946586137Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 13 18:35:56 functional-752103 crio[5390]: time="2025-12-13T18:35:56.946650473Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 13 18:35:56 functional-752103 crio[5390]: time="2025-12-13T18:35:56.947460157Z" level=info msg="Conmon does support the --sync option"
	Dec 13 18:35:56 functional-752103 crio[5390]: time="2025-12-13T18:35:56.947581732Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 13 18:35:56 functional-752103 crio[5390]: time="2025-12-13T18:35:56.947794656Z" level=info msg="Updated default CNI network name to "
	Dec 13 18:35:56 functional-752103 crio[5390]: time="2025-12-13T18:35:56.948548372Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oc
i/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"cgroupfs\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n
uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_
memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_d
ir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [c
rio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 13 18:35:56 functional-752103 crio[5390]: time="2025-12-13T18:35:56.949209238Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 13 18:35:56 functional-752103 crio[5390]: time="2025-12-13T18:35:56.949399688Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 13 18:35:56 functional-752103 crio[5390]: time="2025-12-13T18:35:56.998014506Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 13 18:35:56 functional-752103 crio[5390]: time="2025-12-13T18:35:56.998049287Z" level=info msg="Starting seccomp notifier watcher"
	Dec 13 18:35:56 functional-752103 crio[5390]: time="2025-12-13T18:35:56.998091544Z" level=info msg="Create NRI interface"
	Dec 13 18:35:56 functional-752103 crio[5390]: time="2025-12-13T18:35:56.998182883Z" level=info msg="built-in NRI default validator is disabled"
	Dec 13 18:35:56 functional-752103 crio[5390]: time="2025-12-13T18:35:56.998191835Z" level=info msg="runtime interface created"
	Dec 13 18:35:56 functional-752103 crio[5390]: time="2025-12-13T18:35:56.998201903Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 13 18:35:56 functional-752103 crio[5390]: time="2025-12-13T18:35:56.998208114Z" level=info msg="runtime interface starting up..."
	Dec 13 18:35:56 functional-752103 crio[5390]: time="2025-12-13T18:35:56.99821394Z" level=info msg="starting plugins..."
	Dec 13 18:35:56 functional-752103 crio[5390]: time="2025-12-13T18:35:56.998225148Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 18:35:56 functional-752103 crio[5390]: time="2025-12-13T18:35:56.998287072Z" level=info msg="No systemd watchdog enabled"
	Dec 13 18:35:57 functional-752103 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:42:03.880839    8767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:42:03.881496    8767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:42:03.883070    8767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:42:03.883620    8767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:42:03.885168    8767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 17:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014739] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.517365] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033368] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.774100] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.795951] kauditd_printk_skb: 39 callbacks suppressed
	[Dec13 18:17] overlayfs: idmapped layers are currently not supported
	[  +0.067652] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec13 18:23] overlayfs: idmapped layers are currently not supported
	[Dec13 18:24] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 18:42:03 up  1:24,  0 user,  load average: 0.45, 0.33, 0.44
	Linux functional-752103 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 18:42:01 functional-752103 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 18:42:02 functional-752103 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1132.
	Dec 13 18:42:02 functional-752103 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:42:02 functional-752103 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:42:02 functional-752103 kubelet[8642]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:42:02 functional-752103 kubelet[8642]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:42:02 functional-752103 kubelet[8642]: E1213 18:42:02.157423    8642 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 18:42:02 functional-752103 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 18:42:02 functional-752103 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 18:42:02 functional-752103 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1133.
	Dec 13 18:42:02 functional-752103 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:42:02 functional-752103 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:42:02 functional-752103 kubelet[8675]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:42:02 functional-752103 kubelet[8675]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:42:02 functional-752103 kubelet[8675]: E1213 18:42:02.892115    8675 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 18:42:02 functional-752103 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 18:42:02 functional-752103 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 18:42:03 functional-752103 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1134.
	Dec 13 18:42:03 functional-752103 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:42:03 functional-752103 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:42:03 functional-752103 kubelet[8700]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:42:03 functional-752103 kubelet[8700]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:42:03 functional-752103 kubelet[8700]: E1213 18:42:03.643103    8700 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 18:42:03 functional-752103 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 18:42:03 functional-752103 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-752103 -n functional-752103
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-752103 -n functional-752103: exit status 2 (364.776563ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-752103" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (2.42s)
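The kubelet crash loop in the logs above ("kubelet is configured to not run on a host using cgroup v1") points at the CI host's cgroup mode rather than at this particular test. A minimal check, assuming shell access to the host under test; this is a hedged sketch and not part of the harness:

	# Prints cgroup2fs on a unified (v2) hierarchy, which this kubelet requires;
	# tmpfs indicates the legacy v1 hierarchy it refuses to start on.
	stat -fc %T /sys/fs/cgroup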

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (2.47s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 kubectl -- --context functional-752103 get pods
functional_test.go:731: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-752103 kubectl -- --context functional-752103 get pods: exit status 1 (109.285154ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:734: failed to get pods. args "out/minikube-linux-arm64 -p functional-752103 kubectl -- --context functional-752103 get pods": exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-752103
helpers_test.go:244: (dbg) docker inspect functional-752103:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b",
	        "Created": "2025-12-13T18:27:36.869398923Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 33347,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T18:27:36.933863328Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b/hostname",
	        "HostsPath": "/var/lib/docker/containers/d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b/hosts",
	        "LogPath": "/var/lib/docker/containers/d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b/d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b-json.log",
	        "Name": "/functional-752103",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-752103:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-752103",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b",
	                "LowerDir": "/var/lib/docker/overlay2/4f08fa508e4fa526830ae73227295c5d2e0629c99efbda84852b3df25bd9e170-init/diff:/var/lib/docker/overlay2/4cda671c3c20fb572bbb254b6cb2d66de67b46788c2aa883ec19024f1ff16f23/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4f08fa508e4fa526830ae73227295c5d2e0629c99efbda84852b3df25bd9e170/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4f08fa508e4fa526830ae73227295c5d2e0629c99efbda84852b3df25bd9e170/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4f08fa508e4fa526830ae73227295c5d2e0629c99efbda84852b3df25bd9e170/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-752103",
	                "Source": "/var/lib/docker/volumes/functional-752103/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-752103",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-752103",
	                "name.minikube.sigs.k8s.io": "functional-752103",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "625ea12887c8956887678f2408d6edd5b98f62bce458a6906f4f662a3001a53b",
	            "SandboxKey": "/var/run/docker/netns/625ea12887c8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-752103": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7e:2c:83:4a:30:9a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "84df48e9f7dac8c6a1b67709e5eea216d99d3f16eb50b96c7f0e4a82b3193d56",
	                    "EndpointID": "e69b1f9610d40396647a2d78f0170c31b9cd8e641fc8465e742649cccee8e591",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-752103",
	                        "d72b547cdcc2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
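The inspect output above shows the apiserver port 8441/tcp published to the host at 127.0.0.1:32786. A quick way to read that mapping back without parsing the full JSON, assuming the docker CLI is available on the CI host; a hedged example, not part of the harness:

	# Prints the loopback address and host port bound to the container's 8441/tcp.
	docker port functional-752103 8441/tcp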
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-752103 -n functional-752103
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-752103 -n functional-752103: exit status 2 (321.144202ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-752103 logs -n 25: (1.077824202s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-350101 image ls --format yaml --alsologtostderr                                                                                        │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ image   │ functional-350101 image ls --format short --alsologtostderr                                                                                       │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ ssh     │ functional-350101 ssh pgrep buildkitd                                                                                                             │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │                     │
	│ image   │ functional-350101 image ls --format json --alsologtostderr                                                                                        │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ image   │ functional-350101 image ls --format table --alsologtostderr                                                                                       │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ image   │ functional-350101 image build -t localhost/my-image:functional-350101 testdata/build --alsologtostderr                                            │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ image   │ functional-350101 image ls                                                                                                                        │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ delete  │ -p functional-350101                                                                                                                              │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ start   │ -p functional-752103 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │                     │
	│ start   │ -p functional-752103 --alsologtostderr -v=8                                                                                                       │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:35 UTC │                     │
	│ cache   │ functional-752103 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │ 13 Dec 25 18:42 UTC │
	│ cache   │ functional-752103 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │ 13 Dec 25 18:42 UTC │
	│ cache   │ functional-752103 cache add registry.k8s.io/pause:latest                                                                                          │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │ 13 Dec 25 18:42 UTC │
	│ cache   │ functional-752103 cache add minikube-local-cache-test:functional-752103                                                                           │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │ 13 Dec 25 18:42 UTC │
	│ cache   │ functional-752103 cache delete minikube-local-cache-test:functional-752103                                                                        │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │ 13 Dec 25 18:42 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │ 13 Dec 25 18:42 UTC │
	│ cache   │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │ 13 Dec 25 18:42 UTC │
	│ ssh     │ functional-752103 ssh sudo crictl images                                                                                                          │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │ 13 Dec 25 18:42 UTC │
	│ ssh     │ functional-752103 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │ 13 Dec 25 18:42 UTC │
	│ ssh     │ functional-752103 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │                     │
	│ cache   │ functional-752103 cache reload                                                                                                                    │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │ 13 Dec 25 18:42 UTC │
	│ ssh     │ functional-752103 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │ 13 Dec 25 18:42 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │ 13 Dec 25 18:42 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │ 13 Dec 25 18:42 UTC │
	│ kubectl │ functional-752103 kubectl -- --context functional-752103 get pods                                                                                 │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 18:35:53
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 18:35:53.999245   38829 out.go:360] Setting OutFile to fd 1 ...
	I1213 18:35:53.999434   38829 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:35:53.999464   38829 out.go:374] Setting ErrFile to fd 2...
	I1213 18:35:53.999486   38829 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:35:53.999778   38829 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-2686/.minikube/bin
	I1213 18:35:54.000250   38829 out.go:368] Setting JSON to false
	I1213 18:35:54.001308   38829 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4706,"bootTime":1765646248,"procs":163,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 18:35:54.001457   38829 start.go:143] virtualization:  
	I1213 18:35:54.010388   38829 out.go:179] * [functional-752103] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 18:35:54.014157   38829 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 18:35:54.014353   38829 notify.go:221] Checking for updates...
	I1213 18:35:54.020075   38829 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 18:35:54.023186   38829 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-2686/kubeconfig
	I1213 18:35:54.026171   38829 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-2686/.minikube
	I1213 18:35:54.029213   38829 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 18:35:54.032235   38829 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 18:35:54.035744   38829 config.go:182] Loaded profile config "functional-752103": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 18:35:54.035909   38829 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 18:35:54.059624   38829 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 18:35:54.059744   38829 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 18:35:54.127464   38829 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 18:35:54.118134446 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 18:35:54.127571   38829 docker.go:319] overlay module found
	I1213 18:35:54.130605   38829 out.go:179] * Using the docker driver based on existing profile
	I1213 18:35:54.133521   38829 start.go:309] selected driver: docker
	I1213 18:35:54.133548   38829 start.go:927] validating driver "docker" against &{Name:functional-752103 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-752103 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 18:35:54.133668   38829 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 18:35:54.133779   38829 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 18:35:54.194306   38829 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 18:35:54.184244205 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 18:35:54.194716   38829 cni.go:84] Creating CNI manager for ""
	I1213 18:35:54.194772   38829 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 18:35:54.194827   38829 start.go:353] cluster config:
	{Name:functional-752103 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-752103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 18:35:54.197953   38829 out.go:179] * Starting "functional-752103" primary control-plane node in "functional-752103" cluster
	I1213 18:35:54.200965   38829 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 18:35:54.203964   38829 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 18:35:54.207111   38829 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 18:35:54.207169   38829 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-2686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1213 18:35:54.207189   38829 cache.go:65] Caching tarball of preloaded images
	I1213 18:35:54.207200   38829 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 18:35:54.207268   38829 preload.go:238] Found /home/jenkins/minikube-integration/22122-2686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 18:35:54.207278   38829 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1213 18:35:54.207380   38829 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/config.json ...
	I1213 18:35:54.226684   38829 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 18:35:54.226707   38829 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 18:35:54.226736   38829 cache.go:243] Successfully downloaded all kic artifacts
	I1213 18:35:54.226765   38829 start.go:360] acquireMachinesLock for functional-752103: {Name:mkf4ec1d9e1836ef54983db4562aedfd1a9c51c3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 18:35:54.226834   38829 start.go:364] duration metric: took 45.136µs to acquireMachinesLock for "functional-752103"
	I1213 18:35:54.226856   38829 start.go:96] Skipping create...Using existing machine configuration
	I1213 18:35:54.226865   38829 fix.go:54] fixHost starting: 
	I1213 18:35:54.227126   38829 cli_runner.go:164] Run: docker container inspect functional-752103 --format={{.State.Status}}
	I1213 18:35:54.245088   38829 fix.go:112] recreateIfNeeded on functional-752103: state=Running err=<nil>
	W1213 18:35:54.245125   38829 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 18:35:54.248193   38829 out.go:252] * Updating the running docker "functional-752103" container ...
	I1213 18:35:54.248225   38829 machine.go:94] provisionDockerMachine start ...
	I1213 18:35:54.248302   38829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:35:54.265418   38829 main.go:143] libmachine: Using SSH client type: native
	I1213 18:35:54.265750   38829 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1213 18:35:54.265765   38829 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 18:35:54.412628   38829 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-752103
	
	I1213 18:35:54.412654   38829 ubuntu.go:182] provisioning hostname "functional-752103"
	I1213 18:35:54.412716   38829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:35:54.431532   38829 main.go:143] libmachine: Using SSH client type: native
	I1213 18:35:54.431834   38829 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1213 18:35:54.431851   38829 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-752103 && echo "functional-752103" | sudo tee /etc/hostname
	I1213 18:35:54.592050   38829 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-752103
	
	I1213 18:35:54.592214   38829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:35:54.614592   38829 main.go:143] libmachine: Using SSH client type: native
	I1213 18:35:54.614908   38829 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1213 18:35:54.614930   38829 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-752103' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-752103/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-752103' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 18:35:54.769516   38829 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 18:35:54.769546   38829 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-2686/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-2686/.minikube}
	I1213 18:35:54.769572   38829 ubuntu.go:190] setting up certificates
	I1213 18:35:54.769581   38829 provision.go:84] configureAuth start
	I1213 18:35:54.769640   38829 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-752103
	I1213 18:35:54.787462   38829 provision.go:143] copyHostCerts
	I1213 18:35:54.787509   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem
	I1213 18:35:54.787551   38829 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem, removing ...
	I1213 18:35:54.787563   38829 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem
	I1213 18:35:54.787650   38829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem (1675 bytes)
	I1213 18:35:54.787740   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem
	I1213 18:35:54.787760   38829 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem, removing ...
	I1213 18:35:54.787765   38829 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem
	I1213 18:35:54.787800   38829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem (1082 bytes)
	I1213 18:35:54.787845   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem
	I1213 18:35:54.787868   38829 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem, removing ...
	I1213 18:35:54.787877   38829 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem
	I1213 18:35:54.787902   38829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem (1123 bytes)
	I1213 18:35:54.787955   38829 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem org=jenkins.functional-752103 san=[127.0.0.1 192.168.49.2 functional-752103 localhost minikube]
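	The configureAuth step above generates a server certificate whose SANs are listed in the log: 127.0.0.1, 192.168.49.2, functional-752103, localhost and minikube. Below is a minimal, self-contained Go sketch of that kind of certificate generation; it self-signs for brevity (minikube signs with its CA key instead), and the organization name and 26280h lifetime are simply copied from the log, so treat it as an illustration rather than minikube's provisioner code.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Illustrative only: a self-signed server certificate carrying the same
	// SANs that appear in the provision log above.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.functional-752103"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"functional-752103", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}

	The generated server.pem/server-key.pem pair is then copied to /etc/docker on the node by the copyRemoteCerts step that follows.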
	I1213 18:35:54.878725   38829 provision.go:177] copyRemoteCerts
	I1213 18:35:54.878794   38829 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 18:35:54.878839   38829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:35:54.895961   38829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa Username:docker}
	I1213 18:35:55.009601   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1213 18:35:55.009696   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 18:35:55.033852   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1213 18:35:55.033923   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 18:35:55.052749   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1213 18:35:55.052813   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 18:35:55.072069   38829 provision.go:87] duration metric: took 302.464055ms to configureAuth
	I1213 18:35:55.072107   38829 ubuntu.go:206] setting minikube options for container-runtime
	I1213 18:35:55.072313   38829 config.go:182] Loaded profile config "functional-752103": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 18:35:55.072426   38829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:35:55.092406   38829 main.go:143] libmachine: Using SSH client type: native
	I1213 18:35:55.092745   38829 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1213 18:35:55.092771   38829 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 18:35:55.413226   38829 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 18:35:55.413251   38829 machine.go:97] duration metric: took 1.16501875s to provisionDockerMachine
	I1213 18:35:55.413264   38829 start.go:293] postStartSetup for "functional-752103" (driver="docker")
	I1213 18:35:55.413300   38829 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 18:35:55.413403   38829 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 18:35:55.413470   38829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:35:55.430709   38829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa Username:docker}
	I1213 18:35:55.537093   38829 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 18:35:55.540324   38829 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1213 18:35:55.540345   38829 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1213 18:35:55.540349   38829 command_runner.go:130] > VERSION_ID="12"
	I1213 18:35:55.540354   38829 command_runner.go:130] > VERSION="12 (bookworm)"
	I1213 18:35:55.540359   38829 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1213 18:35:55.540363   38829 command_runner.go:130] > ID=debian
	I1213 18:35:55.540368   38829 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1213 18:35:55.540373   38829 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1213 18:35:55.540379   38829 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1213 18:35:55.540743   38829 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 18:35:55.540767   38829 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 18:35:55.540779   38829 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-2686/.minikube/addons for local assets ...
	I1213 18:35:55.540839   38829 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-2686/.minikube/files for local assets ...
	I1213 18:35:55.540926   38829 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem -> 46372.pem in /etc/ssl/certs
	I1213 18:35:55.540938   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem -> /etc/ssl/certs/46372.pem
	I1213 18:35:55.541035   38829 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/test/nested/copy/4637/hosts -> hosts in /etc/test/nested/copy/4637
	I1213 18:35:55.541044   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/test/nested/copy/4637/hosts -> /etc/test/nested/copy/4637/hosts
	I1213 18:35:55.541087   38829 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4637
	I1213 18:35:55.548955   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem --> /etc/ssl/certs/46372.pem (1708 bytes)
	I1213 18:35:55.566460   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/test/nested/copy/4637/hosts --> /etc/test/nested/copy/4637/hosts (40 bytes)
	I1213 18:35:55.584163   38829 start.go:296] duration metric: took 170.869499ms for postStartSetup
	I1213 18:35:55.584240   38829 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 18:35:55.584294   38829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:35:55.601966   38829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa Username:docker}
	I1213 18:35:55.706486   38829 command_runner.go:130] > 11%
	I1213 18:35:55.706569   38829 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 18:35:55.711597   38829 command_runner.go:130] > 174G
	I1213 18:35:55.711643   38829 fix.go:56] duration metric: took 1.484775946s for fixHost
	I1213 18:35:55.711654   38829 start.go:83] releasing machines lock for "functional-752103", held for 1.484809349s
	I1213 18:35:55.711733   38829 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-752103
	I1213 18:35:55.731505   38829 ssh_runner.go:195] Run: cat /version.json
	I1213 18:35:55.731524   38829 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 18:35:55.731557   38829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:35:55.731578   38829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:35:55.752781   38829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa Username:docker}
	I1213 18:35:55.757282   38829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa Username:docker}
	I1213 18:35:55.945606   38829 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1213 18:35:55.945674   38829 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "9f3959633d311997d75aab86f8ff840f224c6486"}
	I1213 18:35:55.945816   38829 ssh_runner.go:195] Run: systemctl --version
	I1213 18:35:55.951961   38829 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1213 18:35:55.951999   38829 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1213 18:35:55.952322   38829 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 18:35:55.992229   38829 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1213 18:35:56.001527   38829 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1213 18:35:56.001762   38829 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 18:35:56.001849   38829 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 18:35:56.014010   38829 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 18:35:56.014037   38829 start.go:496] detecting cgroup driver to use...
	I1213 18:35:56.014094   38829 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 18:35:56.014182   38829 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 18:35:56.030879   38829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 18:35:56.046797   38829 docker.go:218] disabling cri-docker service (if available) ...
	I1213 18:35:56.046882   38829 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 18:35:56.067384   38829 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 18:35:56.080815   38829 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 18:35:56.192099   38829 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 18:35:56.317541   38829 docker.go:234] disabling docker service ...
	I1213 18:35:56.317693   38829 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 18:35:56.332696   38829 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 18:35:56.345912   38829 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 18:35:56.463560   38829 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 18:35:56.579100   38829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 18:35:56.592582   38829 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 18:35:56.605285   38829 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1213 18:35:56.606432   38829 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 18:35:56.606495   38829 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:35:56.615251   38829 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 18:35:56.615329   38829 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:35:56.624699   38829 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:35:56.633587   38829 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:35:56.642744   38829 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 18:35:56.651128   38829 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:35:56.660108   38829 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:35:56.669661   38829 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:35:56.678839   38829 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 18:35:56.685773   38829 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1213 18:35:56.686744   38829 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 18:35:56.694432   38829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 18:35:56.830483   38829 ssh_runner.go:195] Run: sudo systemctl restart crio
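	The sequence above rewrites CRI-O's drop-in config with sed before restarting the service: it pins pause_image to registry.k8s.io/pause:3.10.1 and switches cgroup_manager to cgroupfs to match the cgroup driver detected on the host. The same two edits, expressed as a small Go sketch (path and replacement values copied from the log; an illustration, not minikube's implementation):

package main

import (
	"os"
	"regexp"
)

func main() {
	// Equivalent, in Go, of the sed edits the log applies to CRI-O's drop-in
	// config. Writing /etc/crio/... requires root, as in the log.
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(conf, data, 0o644); err != nil {
		panic(err)
	}
}

	The restart of crio immediately after these edits is what makes the new pause image and cgroup manager take effect before kubeadm runs.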
	I1213 18:35:57.005048   38829 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 18:35:57.005450   38829 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 18:35:57.010285   38829 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1213 18:35:57.010309   38829 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1213 18:35:57.010316   38829 command_runner.go:130] > Device: 0,72	Inode: 1640        Links: 1
	I1213 18:35:57.010333   38829 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 18:35:57.010338   38829 command_runner.go:130] > Access: 2025-12-13 18:35:56.944672058 +0000
	I1213 18:35:57.010348   38829 command_runner.go:130] > Modify: 2025-12-13 18:35:56.944672058 +0000
	I1213 18:35:57.010355   38829 command_runner.go:130] > Change: 2025-12-13 18:35:56.944672058 +0000
	I1213 18:35:57.010364   38829 command_runner.go:130] >  Birth: -
	I1213 18:35:57.010406   38829 start.go:564] Will wait 60s for crictl version
	I1213 18:35:57.010459   38829 ssh_runner.go:195] Run: which crictl
	I1213 18:35:57.014231   38829 command_runner.go:130] > /usr/local/bin/crictl
	I1213 18:35:57.014339   38829 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 18:35:57.039763   38829 command_runner.go:130] > Version:  0.1.0
	I1213 18:35:57.039785   38829 command_runner.go:130] > RuntimeName:  cri-o
	I1213 18:35:57.039789   38829 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1213 18:35:57.039795   38829 command_runner.go:130] > RuntimeApiVersion:  v1
	I1213 18:35:57.039807   38829 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
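	The Version/RuntimeName/RuntimeVersion/RuntimeApiVersion block above is what crictl reports over the CRI socket configured in /etc/crictl.yaml earlier in the log. A rough Go sketch that asks the runtime for the same information directly over the CRI gRPC API (socket path taken from the log; the k8s.io/cri-api and grpc-go module versions, and the need to run as root to reach the socket, are assumptions):

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Socket path comes from the /etc/crictl.yaml written earlier in the log.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Same fields that `crictl version` prints in the log above.
	resp, err := runtimeapi.NewRuntimeServiceClient(conn).Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("RuntimeName: %s\nRuntimeVersion: %s\nRuntimeApiVersion: %s\n",
		resp.RuntimeName, resp.RuntimeVersion, resp.RuntimeApiVersion)
}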
	I1213 18:35:57.039886   38829 ssh_runner.go:195] Run: crio --version
	I1213 18:35:57.067200   38829 command_runner.go:130] > crio version 1.34.3
	I1213 18:35:57.067289   38829 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1213 18:35:57.067311   38829 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1213 18:35:57.067352   38829 command_runner.go:130] >    GitTreeState:   dirty
	I1213 18:35:57.067376   38829 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1213 18:35:57.067397   38829 command_runner.go:130] >    GoVersion:      go1.24.6
	I1213 18:35:57.067430   38829 command_runner.go:130] >    Compiler:       gc
	I1213 18:35:57.067455   38829 command_runner.go:130] >    Platform:       linux/arm64
	I1213 18:35:57.067476   38829 command_runner.go:130] >    Linkmode:       static
	I1213 18:35:57.067513   38829 command_runner.go:130] >    BuildTags:
	I1213 18:35:57.067537   38829 command_runner.go:130] >      static
	I1213 18:35:57.067557   38829 command_runner.go:130] >      netgo
	I1213 18:35:57.067592   38829 command_runner.go:130] >      osusergo
	I1213 18:35:57.067614   38829 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1213 18:35:57.067632   38829 command_runner.go:130] >      seccomp
	I1213 18:35:57.067651   38829 command_runner.go:130] >      apparmor
	I1213 18:35:57.067685   38829 command_runner.go:130] >      selinux
	I1213 18:35:57.067706   38829 command_runner.go:130] >    LDFlags:          unknown
	I1213 18:35:57.067726   38829 command_runner.go:130] >    SeccompEnabled:   true
	I1213 18:35:57.067760   38829 command_runner.go:130] >    AppArmorEnabled:  false
	I1213 18:35:57.069374   38829 ssh_runner.go:195] Run: crio --version
	I1213 18:35:57.097856   38829 command_runner.go:130] > crio version 1.34.3
	I1213 18:35:57.097937   38829 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1213 18:35:57.097971   38829 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1213 18:35:57.098005   38829 command_runner.go:130] >    GitTreeState:   dirty
	I1213 18:35:57.098025   38829 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1213 18:35:57.098058   38829 command_runner.go:130] >    GoVersion:      go1.24.6
	I1213 18:35:57.098082   38829 command_runner.go:130] >    Compiler:       gc
	I1213 18:35:57.098103   38829 command_runner.go:130] >    Platform:       linux/arm64
	I1213 18:35:57.098156   38829 command_runner.go:130] >    Linkmode:       static
	I1213 18:35:57.098180   38829 command_runner.go:130] >    BuildTags:
	I1213 18:35:57.098200   38829 command_runner.go:130] >      static
	I1213 18:35:57.098234   38829 command_runner.go:130] >      netgo
	I1213 18:35:57.098253   38829 command_runner.go:130] >      osusergo
	I1213 18:35:57.098277   38829 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1213 18:35:57.098306   38829 command_runner.go:130] >      seccomp
	I1213 18:35:57.098328   38829 command_runner.go:130] >      apparmor
	I1213 18:35:57.098348   38829 command_runner.go:130] >      selinux
	I1213 18:35:57.098384   38829 command_runner.go:130] >    LDFlags:          unknown
	I1213 18:35:57.098407   38829 command_runner.go:130] >    SeccompEnabled:   true
	I1213 18:35:57.098425   38829 command_runner.go:130] >    AppArmorEnabled:  false
	I1213 18:35:57.103998   38829 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1213 18:35:57.106795   38829 cli_runner.go:164] Run: docker network inspect functional-752103 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 18:35:57.122531   38829 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 18:35:57.126557   38829 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1213 18:35:57.126659   38829 kubeadm.go:884] updating cluster {Name:functional-752103 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-752103 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQem
uFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 18:35:57.126789   38829 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 18:35:57.126855   38829 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 18:35:57.159258   38829 command_runner.go:130] > {
	I1213 18:35:57.159281   38829 command_runner.go:130] >   "images":  [
	I1213 18:35:57.159286   38829 command_runner.go:130] >     {
	I1213 18:35:57.159295   38829 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1213 18:35:57.159299   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.159305   38829 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1213 18:35:57.159309   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159312   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.159321   38829 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1213 18:35:57.159333   38829 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1213 18:35:57.159349   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159354   38829 command_runner.go:130] >       "size":  "111333938",
	I1213 18:35:57.159358   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.159370   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.159373   38829 command_runner.go:130] >     },
	I1213 18:35:57.159376   38829 command_runner.go:130] >     {
	I1213 18:35:57.159382   38829 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1213 18:35:57.159389   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.159394   38829 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1213 18:35:57.159398   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159402   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.159410   38829 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1213 18:35:57.159421   38829 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1213 18:35:57.159425   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159429   38829 command_runner.go:130] >       "size":  "29037500",
	I1213 18:35:57.159435   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.159443   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.159450   38829 command_runner.go:130] >     },
	I1213 18:35:57.159453   38829 command_runner.go:130] >     {
	I1213 18:35:57.159459   38829 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1213 18:35:57.159466   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.159471   38829 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1213 18:35:57.159474   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159481   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.159489   38829 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1213 18:35:57.159500   38829 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1213 18:35:57.159504   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159508   38829 command_runner.go:130] >       "size":  "74491780",
	I1213 18:35:57.159514   38829 command_runner.go:130] >       "username":  "nonroot",
	I1213 18:35:57.159519   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.159526   38829 command_runner.go:130] >     },
	I1213 18:35:57.159529   38829 command_runner.go:130] >     {
	I1213 18:35:57.159536   38829 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1213 18:35:57.159548   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.159554   38829 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1213 18:35:57.159560   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159564   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.159572   38829 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1213 18:35:57.159582   38829 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1213 18:35:57.159586   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159596   38829 command_runner.go:130] >       "size":  "60857170",
	I1213 18:35:57.159600   38829 command_runner.go:130] >       "uid":  {
	I1213 18:35:57.159604   38829 command_runner.go:130] >         "value":  "0"
	I1213 18:35:57.159607   38829 command_runner.go:130] >       },
	I1213 18:35:57.159618   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.159626   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.159629   38829 command_runner.go:130] >     },
	I1213 18:35:57.159633   38829 command_runner.go:130] >     {
	I1213 18:35:57.159646   38829 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1213 18:35:57.159650   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.159655   38829 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1213 18:35:57.159661   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159665   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.159673   38829 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1213 18:35:57.159684   38829 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1213 18:35:57.159687   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159691   38829 command_runner.go:130] >       "size":  "84949999",
	I1213 18:35:57.159697   38829 command_runner.go:130] >       "uid":  {
	I1213 18:35:57.159701   38829 command_runner.go:130] >         "value":  "0"
	I1213 18:35:57.159706   38829 command_runner.go:130] >       },
	I1213 18:35:57.159710   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.159720   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.159723   38829 command_runner.go:130] >     },
	I1213 18:35:57.159726   38829 command_runner.go:130] >     {
	I1213 18:35:57.159733   38829 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1213 18:35:57.159740   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.159750   38829 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1213 18:35:57.159756   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159762   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.159771   38829 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1213 18:35:57.159782   38829 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1213 18:35:57.159786   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159790   38829 command_runner.go:130] >       "size":  "72170325",
	I1213 18:35:57.159794   38829 command_runner.go:130] >       "uid":  {
	I1213 18:35:57.159800   38829 command_runner.go:130] >         "value":  "0"
	I1213 18:35:57.159804   38829 command_runner.go:130] >       },
	I1213 18:35:57.159810   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.159814   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.159820   38829 command_runner.go:130] >     },
	I1213 18:35:57.159823   38829 command_runner.go:130] >     {
	I1213 18:35:57.159829   38829 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1213 18:35:57.159836   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.159841   38829 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1213 18:35:57.159847   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159851   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.159859   38829 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1213 18:35:57.159870   38829 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1213 18:35:57.159874   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159878   38829 command_runner.go:130] >       "size":  "74106775",
	I1213 18:35:57.159882   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.159888   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.159892   38829 command_runner.go:130] >     },
	I1213 18:35:57.159897   38829 command_runner.go:130] >     {
	I1213 18:35:57.159904   38829 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1213 18:35:57.159910   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.159916   38829 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1213 18:35:57.159926   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159934   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.159942   38829 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1213 18:35:57.159966   38829 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1213 18:35:57.159973   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159977   38829 command_runner.go:130] >       "size":  "49822549",
	I1213 18:35:57.159981   38829 command_runner.go:130] >       "uid":  {
	I1213 18:35:57.159985   38829 command_runner.go:130] >         "value":  "0"
	I1213 18:35:57.159991   38829 command_runner.go:130] >       },
	I1213 18:35:57.159995   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.160003   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.160008   38829 command_runner.go:130] >     },
	I1213 18:35:57.160011   38829 command_runner.go:130] >     {
	I1213 18:35:57.160017   38829 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1213 18:35:57.160025   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.160030   38829 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1213 18:35:57.160033   38829 command_runner.go:130] >       ],
	I1213 18:35:57.160040   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.160048   38829 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1213 18:35:57.160059   38829 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1213 18:35:57.160063   38829 command_runner.go:130] >       ],
	I1213 18:35:57.160067   38829 command_runner.go:130] >       "size":  "519884",
	I1213 18:35:57.160070   38829 command_runner.go:130] >       "uid":  {
	I1213 18:35:57.160077   38829 command_runner.go:130] >         "value":  "65535"
	I1213 18:35:57.160080   38829 command_runner.go:130] >       },
	I1213 18:35:57.160084   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.160093   38829 command_runner.go:130] >       "pinned":  true
	I1213 18:35:57.160096   38829 command_runner.go:130] >     }
	I1213 18:35:57.160101   38829 command_runner.go:130] >   ]
	I1213 18:35:57.160112   38829 command_runner.go:130] > }
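	The preload check keys off this `sudo crictl images --output json` dump, whose shape is visible above: an images array whose entries carry id, repoTags, repoDigests, a string-typed size and a pinned flag. A small Go sketch (a hypothetical helper, not minikube code) that runs the same command and prints the tagged images:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// image mirrors the fields visible in the crictl JSON dump above.
type image struct {
	ID          string   `json:"id"`
	RepoTags    []string `json:"repoTags"`
	RepoDigests []string `json:"repoDigests"`
	Size        string   `json:"size"`
	Pinned      bool     `json:"pinned"`
}

func main() {
	// Same command the log runs; requires crictl on PATH and root on the node.
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var payload struct {
		Images []image `json:"images"`
	}
	if err := json.Unmarshal(out, &payload); err != nil {
		log.Fatal(err)
	}
	for _, img := range payload.Images {
		for _, tag := range img.RepoTags {
			fmt.Printf("%s\t%s bytes\tpinned=%v\n", tag, img.Size, img.Pinned)
		}
	}
}

	minikube compares a list like this against the expected image set for v1.35.0-beta.0 on crio, which is why the next log line concludes that all images are already preloaded.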
	I1213 18:35:57.162388   38829 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 18:35:57.162414   38829 crio.go:433] Images already preloaded, skipping extraction
	I1213 18:35:57.162470   38829 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 18:35:57.186777   38829 command_runner.go:130] > {
	I1213 18:35:57.186796   38829 command_runner.go:130] >   "images":  [
	I1213 18:35:57.186801   38829 command_runner.go:130] >     {
	I1213 18:35:57.186817   38829 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1213 18:35:57.186822   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.186828   38829 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1213 18:35:57.186832   38829 command_runner.go:130] >       ],
	I1213 18:35:57.186836   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.186846   38829 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1213 18:35:57.186854   38829 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1213 18:35:57.186857   38829 command_runner.go:130] >       ],
	I1213 18:35:57.186861   38829 command_runner.go:130] >       "size":  "111333938",
	I1213 18:35:57.186865   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.186873   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.186877   38829 command_runner.go:130] >     },
	I1213 18:35:57.186880   38829 command_runner.go:130] >     {
	I1213 18:35:57.186886   38829 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1213 18:35:57.186890   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.186895   38829 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1213 18:35:57.186898   38829 command_runner.go:130] >       ],
	I1213 18:35:57.186902   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.186913   38829 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1213 18:35:57.186921   38829 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1213 18:35:57.186928   38829 command_runner.go:130] >       ],
	I1213 18:35:57.186933   38829 command_runner.go:130] >       "size":  "29037500",
	I1213 18:35:57.186936   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.186942   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.186945   38829 command_runner.go:130] >     },
	I1213 18:35:57.186948   38829 command_runner.go:130] >     {
	I1213 18:35:57.186954   38829 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1213 18:35:57.186958   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.186963   38829 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1213 18:35:57.186966   38829 command_runner.go:130] >       ],
	I1213 18:35:57.186970   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.186977   38829 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1213 18:35:57.186985   38829 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1213 18:35:57.186992   38829 command_runner.go:130] >       ],
	I1213 18:35:57.186996   38829 command_runner.go:130] >       "size":  "74491780",
	I1213 18:35:57.187000   38829 command_runner.go:130] >       "username":  "nonroot",
	I1213 18:35:57.187004   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.187007   38829 command_runner.go:130] >     },
	I1213 18:35:57.187009   38829 command_runner.go:130] >     {
	I1213 18:35:57.187016   38829 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1213 18:35:57.187020   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.187024   38829 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1213 18:35:57.187029   38829 command_runner.go:130] >       ],
	I1213 18:35:57.187033   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.187041   38829 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1213 18:35:57.187050   38829 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1213 18:35:57.187053   38829 command_runner.go:130] >       ],
	I1213 18:35:57.187057   38829 command_runner.go:130] >       "size":  "60857170",
	I1213 18:35:57.187061   38829 command_runner.go:130] >       "uid":  {
	I1213 18:35:57.187064   38829 command_runner.go:130] >         "value":  "0"
	I1213 18:35:57.187067   38829 command_runner.go:130] >       },
	I1213 18:35:57.187075   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.187079   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.187082   38829 command_runner.go:130] >     },
	I1213 18:35:57.187085   38829 command_runner.go:130] >     {
	I1213 18:35:57.187092   38829 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1213 18:35:57.187095   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.187101   38829 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1213 18:35:57.187104   38829 command_runner.go:130] >       ],
	I1213 18:35:57.187108   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.187115   38829 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1213 18:35:57.187123   38829 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1213 18:35:57.187126   38829 command_runner.go:130] >       ],
	I1213 18:35:57.187130   38829 command_runner.go:130] >       "size":  "84949999",
	I1213 18:35:57.187134   38829 command_runner.go:130] >       "uid":  {
	I1213 18:35:57.187137   38829 command_runner.go:130] >         "value":  "0"
	I1213 18:35:57.187146   38829 command_runner.go:130] >       },
	I1213 18:35:57.187149   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.187153   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.187157   38829 command_runner.go:130] >     },
	I1213 18:35:57.187159   38829 command_runner.go:130] >     {
	I1213 18:35:57.187166   38829 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1213 18:35:57.187170   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.187175   38829 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1213 18:35:57.187178   38829 command_runner.go:130] >       ],
	I1213 18:35:57.187182   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.187190   38829 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1213 18:35:57.187199   38829 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1213 18:35:57.187202   38829 command_runner.go:130] >       ],
	I1213 18:35:57.187206   38829 command_runner.go:130] >       "size":  "72170325",
	I1213 18:35:57.187209   38829 command_runner.go:130] >       "uid":  {
	I1213 18:35:57.187213   38829 command_runner.go:130] >         "value":  "0"
	I1213 18:35:57.187216   38829 command_runner.go:130] >       },
	I1213 18:35:57.187219   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.187223   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.187226   38829 command_runner.go:130] >     },
	I1213 18:35:57.187229   38829 command_runner.go:130] >     {
	I1213 18:35:57.187236   38829 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1213 18:35:57.187239   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.187244   38829 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1213 18:35:57.187247   38829 command_runner.go:130] >       ],
	I1213 18:35:57.187251   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.187258   38829 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1213 18:35:57.187266   38829 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1213 18:35:57.187269   38829 command_runner.go:130] >       ],
	I1213 18:35:57.187273   38829 command_runner.go:130] >       "size":  "74106775",
	I1213 18:35:57.187277   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.187280   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.187283   38829 command_runner.go:130] >     },
	I1213 18:35:57.187291   38829 command_runner.go:130] >     {
	I1213 18:35:57.187297   38829 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1213 18:35:57.187300   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.187306   38829 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1213 18:35:57.187309   38829 command_runner.go:130] >       ],
	I1213 18:35:57.187313   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.187321   38829 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1213 18:35:57.187337   38829 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1213 18:35:57.187340   38829 command_runner.go:130] >       ],
	I1213 18:35:57.187344   38829 command_runner.go:130] >       "size":  "49822549",
	I1213 18:35:57.187348   38829 command_runner.go:130] >       "uid":  {
	I1213 18:35:57.187352   38829 command_runner.go:130] >         "value":  "0"
	I1213 18:35:57.187355   38829 command_runner.go:130] >       },
	I1213 18:35:57.187358   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.187362   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.187364   38829 command_runner.go:130] >     },
	I1213 18:35:57.187367   38829 command_runner.go:130] >     {
	I1213 18:35:57.187374   38829 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1213 18:35:57.187378   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.187382   38829 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1213 18:35:57.187385   38829 command_runner.go:130] >       ],
	I1213 18:35:57.187389   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.187396   38829 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1213 18:35:57.187404   38829 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1213 18:35:57.187407   38829 command_runner.go:130] >       ],
	I1213 18:35:57.187410   38829 command_runner.go:130] >       "size":  "519884",
	I1213 18:35:57.187414   38829 command_runner.go:130] >       "uid":  {
	I1213 18:35:57.187417   38829 command_runner.go:130] >         "value":  "65535"
	I1213 18:35:57.187420   38829 command_runner.go:130] >       },
	I1213 18:35:57.187424   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.187428   38829 command_runner.go:130] >       "pinned":  true
	I1213 18:35:57.187431   38829 command_runner.go:130] >     }
	I1213 18:35:57.187434   38829 command_runner.go:130] >   ]
	I1213 18:35:57.187440   38829 command_runner.go:130] > }
	I1213 18:35:57.187570   38829 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 18:35:57.187578   38829 cache_images.go:86] Images are preloaded, skipping loading
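	(Note: the JSON inventory above is the CRI-O image listing that minikube compares against its preload set before concluding "Images are preloaded, skipping loading". Below is a minimal illustrative sketch of that kind of check, assuming a crictl-style top-level "images" key and a hand-picked required-tag list; it is not minikube's actual cache_images implementation.)

	// check_preload.go - illustrative sketch only.
	// Parse an `images` JSON listing shaped like the one logged above and
	// verify that every required repo tag is already present.
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	type imageList struct {
		Images []struct {
			ID       string   `json:"id"`
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		// Assumption: the listing was saved to images.json beforehand.
		data, err := os.ReadFile("images.json")
		if err != nil {
			panic(err)
		}
		var list imageList
		if err := json.Unmarshal(data, &list); err != nil {
			panic(err)
		}
		have := map[string]bool{}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}
		// Required tags here are copied from the log above for illustration.
		required := []string{
			"registry.k8s.io/kube-apiserver:v1.35.0-beta.0",
			"registry.k8s.io/etcd:3.6.5-0",
			"registry.k8s.io/coredns/coredns:v1.13.1",
		}
		for _, want := range required {
			if !have[want] {
				fmt.Println("missing:", want)
				return
			}
		}
		fmt.Println("all images are preloaded, skipping loading")
	}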
	I1213 18:35:57.187585   38829 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1213 18:35:57.187672   38829 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-752103 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-752103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
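	(Note: the kubelet systemd drop-in above is rendered by minikube from the cluster config that follows it, filling in the Kubernetes version, hostname-override, and node-ip. The snippet below is a minimal, hypothetical text/template rendering of such a drop-in using only the fields visible in the log; the struct and template are assumptions, not minikube's real code.)

	// render_kubelet_dropin.go - illustrative sketch only.
	package main

	import (
		"os"
		"text/template"
	)

	// kubeletUnit holds the handful of values visible in the log above.
	type kubeletUnit struct {
		KubernetesVersion string
		NodeName          string
		NodeIP            string
	}

	const dropIn = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(dropIn))
		// Values copied from the log above.
		_ = t.Execute(os.Stdout, kubeletUnit{
			KubernetesVersion: "v1.35.0-beta.0",
			NodeName:          "functional-752103",
			NodeIP:            "192.168.49.2",
		})
	}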
	I1213 18:35:57.187756   38829 ssh_runner.go:195] Run: crio config
	I1213 18:35:57.235276   38829 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1213 18:35:57.235304   38829 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1213 18:35:57.235312   38829 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1213 18:35:57.235316   38829 command_runner.go:130] > #
	I1213 18:35:57.235323   38829 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1213 18:35:57.235330   38829 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1213 18:35:57.235336   38829 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1213 18:35:57.235344   38829 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1213 18:35:57.235351   38829 command_runner.go:130] > # reload'.
	I1213 18:35:57.235358   38829 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1213 18:35:57.235367   38829 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1213 18:35:57.235374   38829 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1213 18:35:57.235386   38829 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1213 18:35:57.235390   38829 command_runner.go:130] > [crio]
	I1213 18:35:57.235397   38829 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1213 18:35:57.235406   38829 command_runner.go:130] > # containers images, in this directory.
	I1213 18:35:57.235421   38829 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1213 18:35:57.235432   38829 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1213 18:35:57.235437   38829 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1213 18:35:57.235445   38829 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1213 18:35:57.235452   38829 command_runner.go:130] > # imagestore = ""
	I1213 18:35:57.235458   38829 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1213 18:35:57.235468   38829 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1213 18:35:57.235475   38829 command_runner.go:130] > # storage_driver = "overlay"
	I1213 18:35:57.235481   38829 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1213 18:35:57.235491   38829 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1213 18:35:57.235495   38829 command_runner.go:130] > # storage_option = [
	I1213 18:35:57.235502   38829 command_runner.go:130] > # ]
	I1213 18:35:57.235511   38829 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1213 18:35:57.235518   38829 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1213 18:35:57.235533   38829 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1213 18:35:57.235539   38829 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1213 18:35:57.235547   38829 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1213 18:35:57.235554   38829 command_runner.go:130] > # always happen on a node reboot
	I1213 18:35:57.235660   38829 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1213 18:35:57.235692   38829 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1213 18:35:57.235700   38829 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1213 18:35:57.235705   38829 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1213 18:35:57.235710   38829 command_runner.go:130] > # version_file_persist = ""
	I1213 18:35:57.235718   38829 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1213 18:35:57.235727   38829 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1213 18:35:57.235730   38829 command_runner.go:130] > # internal_wipe = true
	I1213 18:35:57.235739   38829 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1213 18:35:57.235744   38829 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1213 18:35:57.235748   38829 command_runner.go:130] > # internal_repair = true
	I1213 18:35:57.235754   38829 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1213 18:35:57.235760   38829 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1213 18:35:57.235769   38829 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1213 18:35:57.235775   38829 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1213 18:35:57.235781   38829 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1213 18:35:57.235784   38829 command_runner.go:130] > [crio.api]
	I1213 18:35:57.235790   38829 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1213 18:35:57.235795   38829 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1213 18:35:57.235800   38829 command_runner.go:130] > # IP address on which the stream server will listen.
	I1213 18:35:57.235804   38829 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1213 18:35:57.235811   38829 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1213 18:35:57.235816   38829 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1213 18:35:57.235819   38829 command_runner.go:130] > # stream_port = "0"
	I1213 18:35:57.235824   38829 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1213 18:35:57.235828   38829 command_runner.go:130] > # stream_enable_tls = false
	I1213 18:35:57.235838   38829 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1213 18:35:57.235842   38829 command_runner.go:130] > # stream_idle_timeout = ""
	I1213 18:35:57.235849   38829 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1213 18:35:57.235854   38829 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1213 18:35:57.235858   38829 command_runner.go:130] > # stream_tls_cert = ""
	I1213 18:35:57.235864   38829 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1213 18:35:57.235869   38829 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1213 18:35:57.235873   38829 command_runner.go:130] > # stream_tls_key = ""
	I1213 18:35:57.235880   38829 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1213 18:35:57.235886   38829 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1213 18:35:57.235892   38829 command_runner.go:130] > # automatically pick up the changes.
	I1213 18:35:57.235896   38829 command_runner.go:130] > # stream_tls_ca = ""
	I1213 18:35:57.235914   38829 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1213 18:35:57.235918   38829 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1213 18:35:57.235926   38829 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1213 18:35:57.235930   38829 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1213 18:35:57.235936   38829 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1213 18:35:57.235942   38829 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1213 18:35:57.235945   38829 command_runner.go:130] > [crio.runtime]
	I1213 18:35:57.235951   38829 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1213 18:35:57.235956   38829 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1213 18:35:57.235960   38829 command_runner.go:130] > # "nofile=1024:2048"
	I1213 18:35:57.235965   38829 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1213 18:35:57.235969   38829 command_runner.go:130] > # default_ulimits = [
	I1213 18:35:57.235972   38829 command_runner.go:130] > # ]
	I1213 18:35:57.235978   38829 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1213 18:35:57.236231   38829 command_runner.go:130] > # no_pivot = false
	I1213 18:35:57.236246   38829 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1213 18:35:57.236252   38829 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1213 18:35:57.236258   38829 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1213 18:35:57.236264   38829 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1213 18:35:57.236272   38829 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1213 18:35:57.236280   38829 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1213 18:35:57.236292   38829 command_runner.go:130] > # conmon = ""
	I1213 18:35:57.236297   38829 command_runner.go:130] > # Cgroup setting for conmon
	I1213 18:35:57.236304   38829 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1213 18:35:57.236308   38829 command_runner.go:130] > conmon_cgroup = "pod"
	I1213 18:35:57.236314   38829 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1213 18:35:57.236320   38829 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1213 18:35:57.236335   38829 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1213 18:35:57.236339   38829 command_runner.go:130] > # conmon_env = [
	I1213 18:35:57.236342   38829 command_runner.go:130] > # ]
	I1213 18:35:57.236348   38829 command_runner.go:130] > # Additional environment variables to set for all the
	I1213 18:35:57.236353   38829 command_runner.go:130] > # containers. These are overridden if set in the
	I1213 18:35:57.236358   38829 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1213 18:35:57.236362   38829 command_runner.go:130] > # default_env = [
	I1213 18:35:57.236365   38829 command_runner.go:130] > # ]
	I1213 18:35:57.236370   38829 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1213 18:35:57.236378   38829 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1213 18:35:57.236386   38829 command_runner.go:130] > # selinux = false
	I1213 18:35:57.236397   38829 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1213 18:35:57.236405   38829 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1213 18:35:57.236415   38829 command_runner.go:130] > # This option supports live configuration reload.
	I1213 18:35:57.236419   38829 command_runner.go:130] > # seccomp_profile = ""
	I1213 18:35:57.236425   38829 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1213 18:35:57.236436   38829 command_runner.go:130] > # This option supports live configuration reload.
	I1213 18:35:57.236440   38829 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1213 18:35:57.236447   38829 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1213 18:35:57.236457   38829 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1213 18:35:57.236464   38829 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1213 18:35:57.236470   38829 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1213 18:35:57.236477   38829 command_runner.go:130] > # This option supports live configuration reload.
	I1213 18:35:57.236482   38829 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1213 18:35:57.236493   38829 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1213 18:35:57.236497   38829 command_runner.go:130] > # the cgroup blockio controller.
	I1213 18:35:57.236501   38829 command_runner.go:130] > # blockio_config_file = ""
	I1213 18:35:57.236512   38829 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1213 18:35:57.236519   38829 command_runner.go:130] > # blockio parameters.
	I1213 18:35:57.236524   38829 command_runner.go:130] > # blockio_reload = false
	I1213 18:35:57.236530   38829 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1213 18:35:57.236538   38829 command_runner.go:130] > # irqbalance daemon.
	I1213 18:35:57.236543   38829 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1213 18:35:57.236550   38829 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1213 18:35:57.236560   38829 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1213 18:35:57.236567   38829 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1213 18:35:57.236573   38829 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1213 18:35:57.236579   38829 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1213 18:35:57.236584   38829 command_runner.go:130] > # This option supports live configuration reload.
	I1213 18:35:57.236589   38829 command_runner.go:130] > # rdt_config_file = ""
	I1213 18:35:57.236594   38829 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1213 18:35:57.236600   38829 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1213 18:35:57.236606   38829 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1213 18:35:57.236612   38829 command_runner.go:130] > # separate_pull_cgroup = ""
	I1213 18:35:57.236619   38829 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1213 18:35:57.236626   38829 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1213 18:35:57.236633   38829 command_runner.go:130] > # will be added.
	I1213 18:35:57.236637   38829 command_runner.go:130] > # default_capabilities = [
	I1213 18:35:57.236640   38829 command_runner.go:130] > # 	"CHOWN",
	I1213 18:35:57.236644   38829 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1213 18:35:57.236647   38829 command_runner.go:130] > # 	"FSETID",
	I1213 18:35:57.236650   38829 command_runner.go:130] > # 	"FOWNER",
	I1213 18:35:57.236653   38829 command_runner.go:130] > # 	"SETGID",
	I1213 18:35:57.236656   38829 command_runner.go:130] > # 	"SETUID",
	I1213 18:35:57.236674   38829 command_runner.go:130] > # 	"SETPCAP",
	I1213 18:35:57.236679   38829 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1213 18:35:57.236682   38829 command_runner.go:130] > # 	"KILL",
	I1213 18:35:57.236685   38829 command_runner.go:130] > # ]
	I1213 18:35:57.236693   38829 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1213 18:35:57.236702   38829 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1213 18:35:57.236710   38829 command_runner.go:130] > # add_inheritable_capabilities = false
	I1213 18:35:57.236716   38829 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1213 18:35:57.236722   38829 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1213 18:35:57.236726   38829 command_runner.go:130] > default_sysctls = [
	I1213 18:35:57.236731   38829 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1213 18:35:57.236734   38829 command_runner.go:130] > ]
	I1213 18:35:57.236738   38829 command_runner.go:130] > # List of devices on the host that a
	I1213 18:35:57.236748   38829 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1213 18:35:57.236755   38829 command_runner.go:130] > # allowed_devices = [
	I1213 18:35:57.236758   38829 command_runner.go:130] > # 	"/dev/fuse",
	I1213 18:35:57.236762   38829 command_runner.go:130] > # 	"/dev/net/tun",
	I1213 18:35:57.236772   38829 command_runner.go:130] > # ]
	I1213 18:35:57.236777   38829 command_runner.go:130] > # List of additional devices. specified as
	I1213 18:35:57.236784   38829 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1213 18:35:57.236794   38829 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1213 18:35:57.236800   38829 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1213 18:35:57.236804   38829 command_runner.go:130] > # additional_devices = [
	I1213 18:35:57.236832   38829 command_runner.go:130] > # ]
	I1213 18:35:57.236837   38829 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1213 18:35:57.236841   38829 command_runner.go:130] > # cdi_spec_dirs = [
	I1213 18:35:57.236844   38829 command_runner.go:130] > # 	"/etc/cdi",
	I1213 18:35:57.236848   38829 command_runner.go:130] > # 	"/var/run/cdi",
	I1213 18:35:57.236854   38829 command_runner.go:130] > # ]
	I1213 18:35:57.236861   38829 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1213 18:35:57.236870   38829 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1213 18:35:57.236874   38829 command_runner.go:130] > # Defaults to false.
	I1213 18:35:57.236880   38829 command_runner.go:130] > # device_ownership_from_security_context = false
	I1213 18:35:57.236891   38829 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1213 18:35:57.236898   38829 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1213 18:35:57.236901   38829 command_runner.go:130] > # hooks_dir = [
	I1213 18:35:57.236908   38829 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1213 18:35:57.236915   38829 command_runner.go:130] > # ]
	I1213 18:35:57.236921   38829 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1213 18:35:57.236931   38829 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1213 18:35:57.236939   38829 command_runner.go:130] > # its default mounts from the following two files:
	I1213 18:35:57.236942   38829 command_runner.go:130] > #
	I1213 18:35:57.236949   38829 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1213 18:35:57.236959   38829 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1213 18:35:57.236964   38829 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1213 18:35:57.236967   38829 command_runner.go:130] > #
	I1213 18:35:57.236974   38829 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1213 18:35:57.236984   38829 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1213 18:35:57.236990   38829 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1213 18:35:57.236996   38829 command_runner.go:130] > #      only add mounts it finds in this file.
	I1213 18:35:57.237024   38829 command_runner.go:130] > #
	I1213 18:35:57.237029   38829 command_runner.go:130] > # default_mounts_file = ""
	I1213 18:35:57.237035   38829 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1213 18:35:57.237044   38829 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1213 18:35:57.237052   38829 command_runner.go:130] > # pids_limit = -1
	I1213 18:35:57.237058   38829 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1213 18:35:57.237065   38829 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1213 18:35:57.237075   38829 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1213 18:35:57.237084   38829 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1213 18:35:57.237092   38829 command_runner.go:130] > # log_size_max = -1
	I1213 18:35:57.237099   38829 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1213 18:35:57.237104   38829 command_runner.go:130] > # log_to_journald = false
	I1213 18:35:57.237114   38829 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1213 18:35:57.237119   38829 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1213 18:35:57.237125   38829 command_runner.go:130] > # Path to directory for container attach sockets.
	I1213 18:35:57.237130   38829 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1213 18:35:57.237137   38829 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1213 18:35:57.237145   38829 command_runner.go:130] > # bind_mount_prefix = ""
	I1213 18:35:57.237151   38829 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1213 18:35:57.237155   38829 command_runner.go:130] > # read_only = false
	I1213 18:35:57.237162   38829 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1213 18:35:57.237173   38829 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1213 18:35:57.237181   38829 command_runner.go:130] > # live configuration reload.
	I1213 18:35:57.237191   38829 command_runner.go:130] > # log_level = "info"
	I1213 18:35:57.237200   38829 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1213 18:35:57.237212   38829 command_runner.go:130] > # This option supports live configuration reload.
	I1213 18:35:57.237216   38829 command_runner.go:130] > # log_filter = ""
	I1213 18:35:57.237222   38829 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1213 18:35:57.237228   38829 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1213 18:35:57.237237   38829 command_runner.go:130] > # separated by comma.
	I1213 18:35:57.237245   38829 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 18:35:57.237249   38829 command_runner.go:130] > # uid_mappings = ""
	I1213 18:35:57.237255   38829 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1213 18:35:57.237265   38829 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1213 18:35:57.237269   38829 command_runner.go:130] > # separated by comma.
	I1213 18:35:57.237277   38829 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 18:35:57.237284   38829 command_runner.go:130] > # gid_mappings = ""
	I1213 18:35:57.237290   38829 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1213 18:35:57.237297   38829 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1213 18:35:57.237311   38829 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1213 18:35:57.237319   38829 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 18:35:57.237323   38829 command_runner.go:130] > # minimum_mappable_uid = -1
	I1213 18:35:57.237329   38829 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1213 18:35:57.237339   38829 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1213 18:35:57.237345   38829 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1213 18:35:57.237354   38829 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 18:35:57.237949   38829 command_runner.go:130] > # minimum_mappable_gid = -1
	I1213 18:35:57.237966   38829 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1213 18:35:57.237972   38829 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1213 18:35:57.237979   38829 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1213 18:35:57.238476   38829 command_runner.go:130] > # ctr_stop_timeout = 30
	I1213 18:35:57.238490   38829 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1213 18:35:57.238497   38829 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1213 18:35:57.238503   38829 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1213 18:35:57.238519   38829 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1213 18:35:57.238932   38829 command_runner.go:130] > # drop_infra_ctr = true
	I1213 18:35:57.238947   38829 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1213 18:35:57.238955   38829 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1213 18:35:57.238963   38829 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1213 18:35:57.239291   38829 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1213 18:35:57.239306   38829 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1213 18:35:57.239313   38829 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1213 18:35:57.239319   38829 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1213 18:35:57.239324   38829 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1213 18:35:57.239634   38829 command_runner.go:130] > # shared_cpuset = ""
	I1213 18:35:57.239648   38829 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1213 18:35:57.239654   38829 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1213 18:35:57.240060   38829 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1213 18:35:57.240075   38829 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1213 18:35:57.240414   38829 command_runner.go:130] > # pinns_path = ""
	I1213 18:35:57.240427   38829 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1213 18:35:57.240434   38829 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1213 18:35:57.240846   38829 command_runner.go:130] > # enable_criu_support = true
	I1213 18:35:57.240873   38829 command_runner.go:130] > # Enable/disable the generation of the container,
	I1213 18:35:57.240881   38829 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1213 18:35:57.241322   38829 command_runner.go:130] > # enable_pod_events = false
	I1213 18:35:57.241336   38829 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1213 18:35:57.241342   38829 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1213 18:35:57.241756   38829 command_runner.go:130] > # default_runtime = "crun"
	I1213 18:35:57.241768   38829 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1213 18:35:57.241777   38829 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1213 18:35:57.241786   38829 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1213 18:35:57.241791   38829 command_runner.go:130] > # creation as a file is not desired either.
	I1213 18:35:57.241800   38829 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1213 18:35:57.241820   38829 command_runner.go:130] > # the hostname is being managed dynamically.
	I1213 18:35:57.242010   38829 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1213 18:35:57.242355   38829 command_runner.go:130] > # ]
	I1213 18:35:57.242370   38829 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1213 18:35:57.242386   38829 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1213 18:35:57.242394   38829 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1213 18:35:57.242400   38829 command_runner.go:130] > # Each entry in the table should follow the format:
	I1213 18:35:57.242406   38829 command_runner.go:130] > #
	I1213 18:35:57.242412   38829 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1213 18:35:57.242419   38829 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1213 18:35:57.242423   38829 command_runner.go:130] > # runtime_type = "oci"
	I1213 18:35:57.242427   38829 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1213 18:35:57.242434   38829 command_runner.go:130] > # inherit_default_runtime = false
	I1213 18:35:57.242441   38829 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1213 18:35:57.242445   38829 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1213 18:35:57.242449   38829 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1213 18:35:57.242460   38829 command_runner.go:130] > # monitor_env = []
	I1213 18:35:57.242465   38829 command_runner.go:130] > # privileged_without_host_devices = false
	I1213 18:35:57.242470   38829 command_runner.go:130] > # allowed_annotations = []
	I1213 18:35:57.242487   38829 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1213 18:35:57.242491   38829 command_runner.go:130] > # no_sync_log = false
	I1213 18:35:57.242496   38829 command_runner.go:130] > # default_annotations = {}
	I1213 18:35:57.242500   38829 command_runner.go:130] > # stream_websockets = false
	I1213 18:35:57.242507   38829 command_runner.go:130] > # seccomp_profile = ""
	I1213 18:35:57.242553   38829 command_runner.go:130] > # Where:
	I1213 18:35:57.242564   38829 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1213 18:35:57.242570   38829 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1213 18:35:57.242577   38829 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1213 18:35:57.242583   38829 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1213 18:35:57.242587   38829 command_runner.go:130] > #   in $PATH.
	I1213 18:35:57.242593   38829 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1213 18:35:57.242598   38829 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1213 18:35:57.242614   38829 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1213 18:35:57.242620   38829 command_runner.go:130] > #   state.
	I1213 18:35:57.242626   38829 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1213 18:35:57.242633   38829 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1213 18:35:57.242641   38829 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1213 18:35:57.242647   38829 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1213 18:35:57.242652   38829 command_runner.go:130] > #   the values from the default runtime on load time.
	I1213 18:35:57.242659   38829 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1213 18:35:57.242665   38829 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1213 18:35:57.242671   38829 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1213 18:35:57.242684   38829 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1213 18:35:57.242694   38829 command_runner.go:130] > #   The currently recognized values are:
	I1213 18:35:57.242701   38829 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1213 18:35:57.242709   38829 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1213 18:35:57.242718   38829 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1213 18:35:57.242724   38829 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1213 18:35:57.242736   38829 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1213 18:35:57.242745   38829 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1213 18:35:57.242761   38829 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1213 18:35:57.242774   38829 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1213 18:35:57.242781   38829 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1213 18:35:57.242788   38829 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1213 18:35:57.242795   38829 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1213 18:35:57.242802   38829 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1213 18:35:57.242813   38829 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1213 18:35:57.242824   38829 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1213 18:35:57.242842   38829 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1213 18:35:57.242850   38829 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1213 18:35:57.242861   38829 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1213 18:35:57.242865   38829 command_runner.go:130] > #   deprecated option "conmon".
	I1213 18:35:57.242873   38829 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1213 18:35:57.242881   38829 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1213 18:35:57.242888   38829 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1213 18:35:57.242894   38829 command_runner.go:130] > #   should be moved to the container's cgroup
	I1213 18:35:57.242911   38829 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1213 18:35:57.242917   38829 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1213 18:35:57.242924   38829 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1213 18:35:57.242933   38829 command_runner.go:130] > #   conmon-rs by using:
	I1213 18:35:57.242941   38829 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1213 18:35:57.242954   38829 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1213 18:35:57.242962   38829 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1213 18:35:57.242973   38829 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1213 18:35:57.242978   38829 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1213 18:35:57.242995   38829 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1213 18:35:57.243003   38829 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1213 18:35:57.243008   38829 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1213 18:35:57.243017   38829 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1213 18:35:57.243027   38829 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1213 18:35:57.243033   38829 command_runner.go:130] > #   when a machine crash happens.
	I1213 18:35:57.243040   38829 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1213 18:35:57.243049   38829 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1213 18:35:57.243065   38829 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1213 18:35:57.243070   38829 command_runner.go:130] > #   seccomp profile for the runtime.
	I1213 18:35:57.243076   38829 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1213 18:35:57.243084   38829 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1213 18:35:57.243094   38829 command_runner.go:130] > #
	I1213 18:35:57.243099   38829 command_runner.go:130] > # Using the seccomp notifier feature:
	I1213 18:35:57.243102   38829 command_runner.go:130] > #
	I1213 18:35:57.243113   38829 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1213 18:35:57.243123   38829 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1213 18:35:57.243126   38829 command_runner.go:130] > #
	I1213 18:35:57.243139   38829 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1213 18:35:57.243153   38829 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1213 18:35:57.243157   38829 command_runner.go:130] > #
	I1213 18:35:57.243163   38829 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1213 18:35:57.243170   38829 command_runner.go:130] > # feature.
	I1213 18:35:57.243173   38829 command_runner.go:130] > #
	I1213 18:35:57.243179   38829 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1213 18:35:57.243186   38829 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1213 18:35:57.243196   38829 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1213 18:35:57.243208   38829 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1213 18:35:57.243219   38829 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1213 18:35:57.243222   38829 command_runner.go:130] > #
	I1213 18:35:57.243229   38829 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1213 18:35:57.243235   38829 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1213 18:35:57.243256   38829 command_runner.go:130] > #
	I1213 18:35:57.243267   38829 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1213 18:35:57.243274   38829 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1213 18:35:57.243283   38829 command_runner.go:130] > #
	I1213 18:35:57.243294   38829 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1213 18:35:57.243301   38829 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1213 18:35:57.243304   38829 command_runner.go:130] > # limitation.
	I1213 18:35:57.243341   38829 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1213 18:35:57.243623   38829 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1213 18:35:57.243757   38829 command_runner.go:130] > runtime_type = ""
	I1213 18:35:57.244003   38829 command_runner.go:130] > runtime_root = "/run/crun"
	I1213 18:35:57.244255   38829 command_runner.go:130] > inherit_default_runtime = false
	I1213 18:35:57.244399   38829 command_runner.go:130] > runtime_config_path = ""
	I1213 18:35:57.244539   38829 command_runner.go:130] > container_min_memory = ""
	I1213 18:35:57.244777   38829 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1213 18:35:57.245055   38829 command_runner.go:130] > monitor_cgroup = "pod"
	I1213 18:35:57.245214   38829 command_runner.go:130] > monitor_exec_cgroup = ""
	I1213 18:35:57.245448   38829 command_runner.go:130] > allowed_annotations = [
	I1213 18:35:57.245605   38829 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1213 18:35:57.245830   38829 command_runner.go:130] > ]
	I1213 18:35:57.246064   38829 command_runner.go:130] > privileged_without_host_devices = false
	I1213 18:35:57.246554   38829 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1213 18:35:57.246808   38829 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1213 18:35:57.246935   38829 command_runner.go:130] > runtime_type = ""
	I1213 18:35:57.247167   38829 command_runner.go:130] > runtime_root = "/run/runc"
	I1213 18:35:57.247404   38829 command_runner.go:130] > inherit_default_runtime = false
	I1213 18:35:57.247591   38829 command_runner.go:130] > runtime_config_path = ""
	I1213 18:35:57.247761   38829 command_runner.go:130] > container_min_memory = ""
	I1213 18:35:57.248046   38829 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1213 18:35:57.248332   38829 command_runner.go:130] > monitor_cgroup = "pod"
	I1213 18:35:57.248492   38829 command_runner.go:130] > monitor_exec_cgroup = ""
	I1213 18:35:57.248957   38829 command_runner.go:130] > privileged_without_host_devices = false
	I1213 18:35:57.249339   38829 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1213 18:35:57.249353   38829 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1213 18:35:57.249360   38829 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1213 18:35:57.249369   38829 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1213 18:35:57.249380   38829 command_runner.go:130] > # The currently supported resources are "cpuperiod" "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1213 18:35:57.249391   38829 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1213 18:35:57.249420   38829 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1213 18:35:57.249432   38829 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1213 18:35:57.249442   38829 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1213 18:35:57.249454   38829 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1213 18:35:57.249460   38829 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1213 18:35:57.249474   38829 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1213 18:35:57.249483   38829 command_runner.go:130] > # Example:
	I1213 18:35:57.249488   38829 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1213 18:35:57.249494   38829 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1213 18:35:57.249507   38829 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1213 18:35:57.249513   38829 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1213 18:35:57.249522   38829 command_runner.go:130] > # cpuset = "0-1"
	I1213 18:35:57.249525   38829 command_runner.go:130] > # cpushares = "5"
	I1213 18:35:57.249529   38829 command_runner.go:130] > # cpuquota = "1000"
	I1213 18:35:57.249533   38829 command_runner.go:130] > # cpuperiod = "100000"
	I1213 18:35:57.249548   38829 command_runner.go:130] > # cpulimit = "35"
	I1213 18:35:57.249556   38829 command_runner.go:130] > # Where:
	I1213 18:35:57.249560   38829 command_runner.go:130] > # The workload name is workload-type.
	I1213 18:35:57.249568   38829 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1213 18:35:57.249574   38829 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1213 18:35:57.249585   38829 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1213 18:35:57.249594   38829 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1213 18:35:57.249604   38829 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1213 18:35:57.249739   38829 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1213 18:35:57.249752   38829 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1213 18:35:57.249757   38829 command_runner.go:130] > # Default value is set to true
	I1213 18:35:57.250196   38829 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1213 18:35:57.250210   38829 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1213 18:35:57.250216   38829 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1213 18:35:57.250220   38829 command_runner.go:130] > # Default value is set to 'false'
	I1213 18:35:57.250699   38829 command_runner.go:130] > # disable_hostport_mapping = false
	I1213 18:35:57.250712   38829 command_runner.go:130] > # timezone To set the timezone for a container in CRI-O.
	I1213 18:35:57.250722   38829 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1213 18:35:57.251071   38829 command_runner.go:130] > # timezone = ""
	I1213 18:35:57.251082   38829 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1213 18:35:57.251086   38829 command_runner.go:130] > #
	I1213 18:35:57.251093   38829 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1213 18:35:57.251100   38829 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1213 18:35:57.251103   38829 command_runner.go:130] > [crio.image]
	I1213 18:35:57.251109   38829 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1213 18:35:57.251555   38829 command_runner.go:130] > # default_transport = "docker://"
	I1213 18:35:57.251569   38829 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1213 18:35:57.251576   38829 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1213 18:35:57.251964   38829 command_runner.go:130] > # global_auth_file = ""
	I1213 18:35:57.251977   38829 command_runner.go:130] > # The image used to instantiate infra containers.
	I1213 18:35:57.251982   38829 command_runner.go:130] > # This option supports live configuration reload.
	I1213 18:35:57.252443   38829 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1213 18:35:57.252459   38829 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1213 18:35:57.252468   38829 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1213 18:35:57.252474   38829 command_runner.go:130] > # This option supports live configuration reload.
	I1213 18:35:57.252817   38829 command_runner.go:130] > # pause_image_auth_file = ""
	I1213 18:35:57.252830   38829 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1213 18:35:57.252837   38829 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1213 18:35:57.252844   38829 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1213 18:35:57.252849   38829 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1213 18:35:57.253309   38829 command_runner.go:130] > # pause_command = "/pause"
	I1213 18:35:57.253323   38829 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1213 18:35:57.253330   38829 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1213 18:35:57.253336   38829 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1213 18:35:57.253342   38829 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1213 18:35:57.253349   38829 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1213 18:35:57.253355   38829 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1213 18:35:57.253590   38829 command_runner.go:130] > # pinned_images = [
	I1213 18:35:57.253600   38829 command_runner.go:130] > # ]
	I1213 18:35:57.253607   38829 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1213 18:35:57.253614   38829 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1213 18:35:57.253621   38829 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1213 18:35:57.253627   38829 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1213 18:35:57.253636   38829 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1213 18:35:57.253910   38829 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1213 18:35:57.253925   38829 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1213 18:35:57.253939   38829 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1213 18:35:57.253949   38829 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1213 18:35:57.253960   38829 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1213 18:35:57.253967   38829 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1213 18:35:57.253980   38829 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1213 18:35:57.253986   38829 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1213 18:35:57.253995   38829 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1213 18:35:57.254000   38829 command_runner.go:130] > # changing them here.
	I1213 18:35:57.254012   38829 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1213 18:35:57.254016   38829 command_runner.go:130] > # insecure_registries = [
	I1213 18:35:57.254268   38829 command_runner.go:130] > # ]
	I1213 18:35:57.254281   38829 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1213 18:35:57.254287   38829 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1213 18:35:57.254424   38829 command_runner.go:130] > # image_volumes = "mkdir"
	I1213 18:35:57.254436   38829 command_runner.go:130] > # Temporary directory to use for storing big files
	I1213 18:35:57.254580   38829 command_runner.go:130] > # big_files_temporary_dir = ""
	I1213 18:35:57.254592   38829 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1213 18:35:57.254600   38829 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1213 18:35:57.254897   38829 command_runner.go:130] > # auto_reload_registries = false
	I1213 18:35:57.254910   38829 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1213 18:35:57.254920   38829 command_runner.go:130] > # gets canceled. This value will be also used for calculating the pull progress interval to pull_progress_timeout / 10.
	I1213 18:35:57.254926   38829 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1213 18:35:57.254930   38829 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1213 18:35:57.254935   38829 command_runner.go:130] > # The mode of short name resolution.
	I1213 18:35:57.254941   38829 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1213 18:35:57.254949   38829 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used, but the results are ambiguous.
	I1213 18:35:57.254965   38829 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1213 18:35:57.254970   38829 command_runner.go:130] > # short_name_mode = "enforcing"
	I1213 18:35:57.254982   38829 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1213 18:35:57.254988   38829 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1213 18:35:57.255234   38829 command_runner.go:130] > # oci_artifact_mount_support = true
	I1213 18:35:57.255247   38829 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1213 18:35:57.255251   38829 command_runner.go:130] > # CNI plugins.
	I1213 18:35:57.255254   38829 command_runner.go:130] > [crio.network]
	I1213 18:35:57.255260   38829 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1213 18:35:57.255266   38829 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1213 18:35:57.255275   38829 command_runner.go:130] > # cni_default_network = ""
	I1213 18:35:57.255283   38829 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1213 18:35:57.255416   38829 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1213 18:35:57.255429   38829 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1213 18:35:57.255573   38829 command_runner.go:130] > # plugin_dirs = [
	I1213 18:35:57.255807   38829 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1213 18:35:57.255816   38829 command_runner.go:130] > # ]
	I1213 18:35:57.255821   38829 command_runner.go:130] > # List of included pod metrics.
	I1213 18:35:57.255825   38829 command_runner.go:130] > # included_pod_metrics = [
	I1213 18:35:57.255828   38829 command_runner.go:130] > # ]
	I1213 18:35:57.255834   38829 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1213 18:35:57.255838   38829 command_runner.go:130] > [crio.metrics]
	I1213 18:35:57.255843   38829 command_runner.go:130] > # Globally enable or disable metrics support.
	I1213 18:35:57.255847   38829 command_runner.go:130] > # enable_metrics = false
	I1213 18:35:57.255851   38829 command_runner.go:130] > # Specify enabled metrics collectors.
	I1213 18:35:57.255867   38829 command_runner.go:130] > # Per default all metrics are enabled.
	I1213 18:35:57.255879   38829 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1213 18:35:57.255889   38829 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1213 18:35:57.255900   38829 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1213 18:35:57.255905   38829 command_runner.go:130] > # metrics_collectors = [
	I1213 18:35:57.256016   38829 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1213 18:35:57.256027   38829 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1213 18:35:57.256031   38829 command_runner.go:130] > # 	"containers_oom_total",
	I1213 18:35:57.256331   38829 command_runner.go:130] > # 	"processes_defunct",
	I1213 18:35:57.256341   38829 command_runner.go:130] > # 	"operations_total",
	I1213 18:35:57.256346   38829 command_runner.go:130] > # 	"operations_latency_seconds",
	I1213 18:35:57.256351   38829 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1213 18:35:57.256361   38829 command_runner.go:130] > # 	"operations_errors_total",
	I1213 18:35:57.256365   38829 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1213 18:35:57.256370   38829 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1213 18:35:57.256374   38829 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1213 18:35:57.257117   38829 command_runner.go:130] > # 	"image_pulls_success_total",
	I1213 18:35:57.257132   38829 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1213 18:35:57.257137   38829 command_runner.go:130] > # 	"containers_oom_count_total",
	I1213 18:35:57.257143   38829 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1213 18:35:57.257155   38829 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1213 18:35:57.257161   38829 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1213 18:35:57.257170   38829 command_runner.go:130] > # ]
	I1213 18:35:57.257177   38829 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1213 18:35:57.257185   38829 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1213 18:35:57.257191   38829 command_runner.go:130] > # The port on which the metrics server will listen.
	I1213 18:35:57.257199   38829 command_runner.go:130] > # metrics_port = 9090
	I1213 18:35:57.257204   38829 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1213 18:35:57.257212   38829 command_runner.go:130] > # metrics_socket = ""
	I1213 18:35:57.257233   38829 command_runner.go:130] > # The certificate for the secure metrics server.
	I1213 18:35:57.257245   38829 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1213 18:35:57.257252   38829 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1213 18:35:57.257260   38829 command_runner.go:130] > # certificate on any modification event.
	I1213 18:35:57.257270   38829 command_runner.go:130] > # metrics_cert = ""
	I1213 18:35:57.257276   38829 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1213 18:35:57.257285   38829 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1213 18:35:57.257289   38829 command_runner.go:130] > # metrics_key = ""
	I1213 18:35:57.257299   38829 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1213 18:35:57.257318   38829 command_runner.go:130] > [crio.tracing]
	I1213 18:35:57.257325   38829 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1213 18:35:57.257329   38829 command_runner.go:130] > # enable_tracing = false
	I1213 18:35:57.257339   38829 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1213 18:35:57.257343   38829 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1213 18:35:57.257354   38829 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1213 18:35:57.257366   38829 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1213 18:35:57.257381   38829 command_runner.go:130] > # CRI-O NRI configuration.
	I1213 18:35:57.257393   38829 command_runner.go:130] > [crio.nri]
	I1213 18:35:57.257402   38829 command_runner.go:130] > # Globally enable or disable NRI.
	I1213 18:35:57.257406   38829 command_runner.go:130] > # enable_nri = true
	I1213 18:35:57.257410   38829 command_runner.go:130] > # NRI socket to listen on.
	I1213 18:35:57.257415   38829 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1213 18:35:57.257423   38829 command_runner.go:130] > # NRI plugin directory to use.
	I1213 18:35:57.257428   38829 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1213 18:35:57.257437   38829 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1213 18:35:57.257442   38829 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1213 18:35:57.257457   38829 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1213 18:35:57.257514   38829 command_runner.go:130] > # nri_disable_connections = false
	I1213 18:35:57.257530   38829 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1213 18:35:57.257535   38829 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1213 18:35:57.257544   38829 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1213 18:35:57.257549   38829 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1213 18:35:57.257558   38829 command_runner.go:130] > # NRI default validator configuration.
	I1213 18:35:57.257566   38829 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1213 18:35:57.257576   38829 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1213 18:35:57.257584   38829 command_runner.go:130] > # can be restricted/rejected:
	I1213 18:35:57.257588   38829 command_runner.go:130] > # - OCI hook injection
	I1213 18:35:57.257597   38829 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1213 18:35:57.257609   38829 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1213 18:35:57.257615   38829 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1213 18:35:57.257624   38829 command_runner.go:130] > # - adjustment of linux namespaces
	I1213 18:35:57.257632   38829 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1213 18:35:57.257642   38829 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1213 18:35:57.257652   38829 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1213 18:35:57.257660   38829 command_runner.go:130] > #
	I1213 18:35:57.257664   38829 command_runner.go:130] > # [crio.nri.default_validator]
	I1213 18:35:57.257672   38829 command_runner.go:130] > # nri_enable_default_validator = false
	I1213 18:35:57.257686   38829 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1213 18:35:57.257692   38829 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1213 18:35:57.257699   38829 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1213 18:35:57.257712   38829 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1213 18:35:57.257721   38829 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1213 18:35:57.257726   38829 command_runner.go:130] > # nri_validator_required_plugins = [
	I1213 18:35:57.257732   38829 command_runner.go:130] > # ]
	I1213 18:35:57.257738   38829 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1213 18:35:57.257747   38829 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1213 18:35:57.257763   38829 command_runner.go:130] > [crio.stats]
	I1213 18:35:57.257772   38829 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1213 18:35:57.257778   38829 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1213 18:35:57.257782   38829 command_runner.go:130] > # stats_collection_period = 0
	I1213 18:35:57.257792   38829 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1213 18:35:57.257800   38829 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1213 18:35:57.257809   38829 command_runner.go:130] > # collection_period = 0
	I1213 18:35:57.259571   38829 command_runner.go:130] ! time="2025-12-13T18:35:57.21464252Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1213 18:35:57.259589   38829 command_runner.go:130] ! time="2025-12-13T18:35:57.214677794Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1213 18:35:57.259613   38829 command_runner.go:130] ! time="2025-12-13T18:35:57.214706635Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1213 18:35:57.259625   38829 command_runner.go:130] ! time="2025-12-13T18:35:57.21473084Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1213 18:35:57.259635   38829 command_runner.go:130] ! time="2025-12-13T18:35:57.214801782Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:35:57.259643   38829 command_runner.go:130] ! time="2025-12-13T18:35:57.215251382Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1213 18:35:57.259658   38829 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1213 18:35:57.259749   38829 cni.go:84] Creating CNI manager for ""
	I1213 18:35:57.259765   38829 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 18:35:57.259800   38829 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 18:35:57.259831   38829 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-752103 NodeName:functional-752103 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 18:35:57.259972   38829 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-752103"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 18:35:57.260053   38829 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 18:35:57.267743   38829 command_runner.go:130] > kubeadm
	I1213 18:35:57.267764   38829 command_runner.go:130] > kubectl
	I1213 18:35:57.267769   38829 command_runner.go:130] > kubelet
	I1213 18:35:57.268114   38829 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 18:35:57.268211   38829 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 18:35:57.275739   38829 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1213 18:35:57.288967   38829 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 18:35:57.301790   38829 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1213 18:35:57.314673   38829 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 18:35:57.318486   38829 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1213 18:35:57.318580   38829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 18:35:57.437137   38829 ssh_runner.go:195] Run: sudo systemctl start kubelet
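The lines above write the kubelet systemd drop-in (10-kubeadm.conf) and kubelet.service over SSH, then reload systemd and start the kubelet. A minimal local Go sketch of that sequence follows; the drop-in contents and the helper are illustrative assumptions, not minikube's ssh_runner code.

// kubelet_restart_sketch.go - write a unit drop-in, reload systemd, start kubelet.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v: %v\n%s", name, args, err, out)
	}
	return nil
}

func main() {
	dropIn := "/etc/systemd/system/kubelet.service.d/10-kubeadm.conf"
	// Placeholder contents; the real drop-in is rendered from the cluster config.
	if err := os.WriteFile(dropIn, []byte("[Service]\n"), 0o644); err != nil {
		panic(err)
	}
	// Reload systemd so it sees the new drop-in, then start the kubelet.
	for _, args := range [][]string{{"daemon-reload"}, {"start", "kubelet"}} {
		if err := run("systemctl", args...); err != nil {
			panic(err)
		}
	}
}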
	I1213 18:35:57.456752   38829 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103 for IP: 192.168.49.2
	I1213 18:35:57.456776   38829 certs.go:195] generating shared ca certs ...
	I1213 18:35:57.456809   38829 certs.go:227] acquiring lock for ca certs: {Name:mkf9b87b9a1a82bdfcae8a54365751154f6d858f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 18:35:57.456950   38829 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key
	I1213 18:35:57.457003   38829 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key
	I1213 18:35:57.457091   38829 certs.go:257] generating profile certs ...
	I1213 18:35:57.457200   38829 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/client.key
	I1213 18:35:57.457253   38829 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/apiserver.key.597c6026
	I1213 18:35:57.457304   38829 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/proxy-client.key
	I1213 18:35:57.457312   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1213 18:35:57.457324   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1213 18:35:57.457340   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1213 18:35:57.457356   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1213 18:35:57.457367   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1213 18:35:57.457383   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1213 18:35:57.457395   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1213 18:35:57.457405   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1213 18:35:57.457457   38829 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637.pem (1338 bytes)
	W1213 18:35:57.457490   38829 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637_empty.pem, impossibly tiny 0 bytes
	I1213 18:35:57.457499   38829 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 18:35:57.457529   38829 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem (1082 bytes)
	I1213 18:35:57.457562   38829 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem (1123 bytes)
	I1213 18:35:57.457593   38829 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem (1675 bytes)
	I1213 18:35:57.457644   38829 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem (1708 bytes)
	I1213 18:35:57.457676   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637.pem -> /usr/share/ca-certificates/4637.pem
	I1213 18:35:57.457691   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem -> /usr/share/ca-certificates/46372.pem
	I1213 18:35:57.457705   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1213 18:35:57.458319   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 18:35:57.479443   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 18:35:57.498974   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 18:35:57.520210   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 18:35:57.540966   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 18:35:57.558774   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 18:35:57.576442   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 18:35:57.593767   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 18:35:57.611061   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637.pem --> /usr/share/ca-certificates/4637.pem (1338 bytes)
	I1213 18:35:57.628952   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem --> /usr/share/ca-certificates/46372.pem (1708 bytes)
	I1213 18:35:57.646627   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 18:35:57.664290   38829 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 18:35:57.677693   38829 ssh_runner.go:195] Run: openssl version
	I1213 18:35:57.683465   38829 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1213 18:35:57.683918   38829 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4637.pem
	I1213 18:35:57.691710   38829 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4637.pem /etc/ssl/certs/4637.pem
	I1213 18:35:57.699237   38829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4637.pem
	I1213 18:35:57.702943   38829 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 13 18:27 /usr/share/ca-certificates/4637.pem
	I1213 18:35:57.702972   38829 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 18:27 /usr/share/ca-certificates/4637.pem
	I1213 18:35:57.703038   38829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4637.pem
	I1213 18:35:57.743436   38829 command_runner.go:130] > 51391683
	I1213 18:35:57.743914   38829 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 18:35:57.751320   38829 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/46372.pem
	I1213 18:35:57.758498   38829 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/46372.pem /etc/ssl/certs/46372.pem
	I1213 18:35:57.765907   38829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/46372.pem
	I1213 18:35:57.769321   38829 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 13 18:27 /usr/share/ca-certificates/46372.pem
	I1213 18:35:57.769343   38829 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 18:27 /usr/share/ca-certificates/46372.pem
	I1213 18:35:57.769391   38829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/46372.pem
	I1213 18:35:57.809666   38829 command_runner.go:130] > 3ec20f2e
	I1213 18:35:57.810146   38829 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 18:35:57.818335   38829 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 18:35:57.826660   38829 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 18:35:57.834746   38829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 18:35:57.838666   38829 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 13 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I1213 18:35:57.838764   38829 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I1213 18:35:57.838851   38829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 18:35:57.879619   38829 command_runner.go:130] > b5213941
	I1213 18:35:57.880088   38829 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
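The hash checks above install each CA by symlinking it into /etc/ssl/certs and verifying that OpenSSL's subject hash (from "openssl x509 -hash -noout") resolves to a "<hash>.0" link. A small Go sketch of the same probe, reusing the commands and paths from this run; the helper itself is hypothetical.

// ca_hash_check_sketch.go - confirm /etc/ssl/certs/<subject-hash>.0 exists for a CA.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" in the log above
	link := "/etc/ssl/certs/" + hash + ".0"
	if _, err := os.Lstat(link); err != nil {
		fmt.Printf("missing symlink %s: %v\n", link, err)
		return
	}
	fmt.Printf("%s is installed for %s\n", link, pem)
}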
	I1213 18:35:57.887654   38829 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 18:35:57.891412   38829 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 18:35:57.891437   38829 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1213 18:35:57.891445   38829 command_runner.go:130] > Device: 259,1	Inode: 1056084     Links: 1
	I1213 18:35:57.891452   38829 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 18:35:57.891459   38829 command_runner.go:130] > Access: 2025-12-13 18:31:50.964784337 +0000
	I1213 18:35:57.891465   38829 command_runner.go:130] > Modify: 2025-12-13 18:27:46.490235937 +0000
	I1213 18:35:57.891470   38829 command_runner.go:130] > Change: 2025-12-13 18:27:46.490235937 +0000
	I1213 18:35:57.891475   38829 command_runner.go:130] >  Birth: 2025-12-13 18:27:46.490235937 +0000
	I1213 18:35:57.891539   38829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 18:35:57.937033   38829 command_runner.go:130] > Certificate will not expire
	I1213 18:35:57.937482   38829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 18:35:57.978137   38829 command_runner.go:130] > Certificate will not expire
	I1213 18:35:57.978564   38829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 18:35:58.033951   38829 command_runner.go:130] > Certificate will not expire
	I1213 18:35:58.034441   38829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 18:35:58.075936   38829 command_runner.go:130] > Certificate will not expire
	I1213 18:35:58.076412   38829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 18:35:58.118212   38829 command_runner.go:130] > Certificate will not expire
	I1213 18:35:58.118338   38829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 18:35:58.159347   38829 command_runner.go:130] > Certificate will not expire
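Each "openssl x509 ... -checkend 86400" run above asks whether a control-plane certificate expires within the next 24 hours. A crypto/x509 sketch of the same check, under the assumption that the files are plain PEM certificates; the file list mirrors the log.

// cert_checkend_sketch.go - report certificates expiring within 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	certs := []string{
		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	}
	for _, c := range certs {
		soon, err := expiresWithin(c, 24*time.Hour)
		if err != nil {
			fmt.Println(err)
			continue
		}
		if soon {
			fmt.Println(c, "will expire within 24h")
		} else {
			fmt.Println(c, "will not expire") // matches the openssl output above
		}
	}
}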
	I1213 18:35:58.159444   38829 kubeadm.go:401] StartCluster: {Name:functional-752103 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-752103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFi
rmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 18:35:58.159559   38829 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 18:35:58.159642   38829 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 18:35:58.186428   38829 cri.go:89] found id: ""
	I1213 18:35:58.186502   38829 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 18:35:58.193645   38829 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1213 18:35:58.193670   38829 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1213 18:35:58.193678   38829 command_runner.go:130] > /var/lib/minikube/etcd:
	I1213 18:35:58.194604   38829 kubeadm.go:417] found existing configuration files, will attempt cluster restart
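The "found existing configuration files, will attempt cluster restart" decision follows from the "sudo ls" above: the kubelet config and etcd data directory already exist, so the cluster is restarted rather than re-initialized. A minimal sketch of that kind of check (not minikube's actual kubeadm.go logic):

// restart_decision_sketch.go - reuse the cluster if prior state is on disk.
package main

import (
	"fmt"
	"os"
)

func main() {
	paths := []string{
		"/var/lib/kubelet/kubeadm-flags.env",
		"/var/lib/kubelet/config.yaml",
		"/var/lib/minikube/etcd",
	}
	existing := 0
	for _, p := range paths {
		if _, err := os.Stat(p); err == nil {
			existing++
		}
	}
	if existing > 0 {
		fmt.Println("found existing configuration files, attempting cluster restart")
	} else {
		fmt.Println("no prior state, running a fresh kubeadm init")
	}
}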
	I1213 18:35:58.194674   38829 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 18:35:58.194749   38829 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 18:35:58.202237   38829 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 18:35:58.202735   38829 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-752103" does not appear in /home/jenkins/minikube-integration/22122-2686/kubeconfig
	I1213 18:35:58.202850   38829 kubeconfig.go:62] /home/jenkins/minikube-integration/22122-2686/kubeconfig needs updating (will repair): [kubeconfig missing "functional-752103" cluster setting kubeconfig missing "functional-752103" context setting]
	I1213 18:35:58.203123   38829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/kubeconfig: {Name:mkd364151ba0e08b56cf2ae826abbcc274faaabf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 18:35:58.203546   38829 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22122-2686/kubeconfig
	I1213 18:35:58.203705   38829 kapi.go:59] client config for functional-752103: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/client.crt", KeyFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/client.key", CAFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextP
rotos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
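	The kubeconfig repair logged above adds the missing "functional-752103" cluster and context entries and points them at https://192.168.49.2:8441. A clientcmd sketch of that repair, assuming the certificate and kubeconfig paths shown in this run; the helper is illustrative, not minikube's kubeconfig.go.

// kubeconfig_repair_sketch.go - add a missing cluster/context entry to a kubeconfig.
package main

import (
	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	path := "/home/jenkins/minikube-integration/22122-2686/kubeconfig"
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		panic(err)
	}
	name := "functional-752103"
	cfg.Clusters[name] = &api.Cluster{
		Server:               "https://192.168.49.2:8441",
		CertificateAuthority: "/home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt",
	}
	cfg.AuthInfos[name] = &api.AuthInfo{
		ClientCertificate: "/home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/client.crt",
		ClientKey:         "/home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/client.key",
	}
	cfg.Contexts[name] = &api.Context{Cluster: name, AuthInfo: name}
	cfg.CurrentContext = name
	// WriteToFile takes the config by value.
	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
		panic(err)
	}
}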
	I1213 18:35:58.204223   38829 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1213 18:35:58.204247   38829 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1213 18:35:58.204258   38829 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1213 18:35:58.204263   38829 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1213 18:35:58.204267   38829 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1213 18:35:58.204300   38829 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1213 18:35:58.204536   38829 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 18:35:58.212005   38829 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1213 18:35:58.212037   38829 kubeadm.go:602] duration metric: took 17.346627ms to restartPrimaryControlPlane
	I1213 18:35:58.212045   38829 kubeadm.go:403] duration metric: took 52.608163ms to StartCluster
	I1213 18:35:58.212060   38829 settings.go:142] acquiring lock: {Name:mkabef07beee93a0619ef6b8f854900ab9ed0899 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 18:35:58.212116   38829 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22122-2686/kubeconfig
	I1213 18:35:58.212712   38829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/kubeconfig: {Name:mkd364151ba0e08b56cf2ae826abbcc274faaabf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 18:35:58.212903   38829 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 18:35:58.213488   38829 config.go:182] Loaded profile config "functional-752103": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 18:35:58.213543   38829 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 18:35:58.213607   38829 addons.go:70] Setting storage-provisioner=true in profile "functional-752103"
	I1213 18:35:58.213620   38829 addons.go:239] Setting addon storage-provisioner=true in "functional-752103"
	I1213 18:35:58.213643   38829 host.go:66] Checking if "functional-752103" exists ...
	I1213 18:35:58.214229   38829 cli_runner.go:164] Run: docker container inspect functional-752103 --format={{.State.Status}}
	I1213 18:35:58.214390   38829 addons.go:70] Setting default-storageclass=true in profile "functional-752103"
	I1213 18:35:58.214412   38829 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-752103"
	I1213 18:35:58.214713   38829 cli_runner.go:164] Run: docker container inspect functional-752103 --format={{.State.Status}}
	I1213 18:35:58.219256   38829 out.go:179] * Verifying Kubernetes components...
	I1213 18:35:58.222143   38829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 18:35:58.244199   38829 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 18:35:58.247016   38829 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:35:58.247042   38829 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 18:35:58.247112   38829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:35:58.257520   38829 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22122-2686/kubeconfig
	I1213 18:35:58.257687   38829 kapi.go:59] client config for functional-752103: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/client.crt", KeyFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/client.key", CAFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextP
rotos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 18:35:58.257971   38829 addons.go:239] Setting addon default-storageclass=true in "functional-752103"
	I1213 18:35:58.258004   38829 host.go:66] Checking if "functional-752103" exists ...
	I1213 18:35:58.258425   38829 cli_runner.go:164] Run: docker container inspect functional-752103 --format={{.State.Status}}
	I1213 18:35:58.277237   38829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa Username:docker}
	I1213 18:35:58.306835   38829 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 18:35:58.306855   38829 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 18:35:58.306918   38829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:35:58.340724   38829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa Username:docker}
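The sshutil.go lines above open key-based SSH sessions as user "docker" against the node's forwarded port 32783 to push the addon manifests. A golang.org/x/crypto/ssh sketch of that client setup; the library usage is an illustrative assumption, not minikube's sshutil implementation.

// sshutil_sketch.go - key-based SSH to the node and run one remote command.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyPath := "/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa"
	key, err := os.ReadFile(keyPath)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test node
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:32783", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, _ := sess.CombinedOutput("sudo systemctl is-active kubelet")
	fmt.Print(string(out))
}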
	I1213 18:35:58.416694   38829 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 18:35:58.451165   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:35:58.493354   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:35:59.080268   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:35:59.080307   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:35:59.080337   38829 retry.go:31] will retry after 153.209012ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:35:59.080385   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:35:59.080398   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:35:59.080404   38829 retry.go:31] will retry after 291.62792ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:35:59.080464   38829 node_ready.go:35] waiting up to 6m0s for node "functional-752103" to be "Ready" ...
	I1213 18:35:59.080578   38829 type.go:168] "Request Body" body=""
	I1213 18:35:59.080656   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:35:59.080963   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
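The round_trippers lines show the node_ready wait: repeated GETs of /api/v1/nodes/functional-752103 until the Ready condition turns true, with a 6 minute budget ("waiting up to 6m0s"). A client-go sketch of the same poll; the kubeconfig path and node name are taken from this run, the polling helper is not minikube's node_ready.go.

// node_ready_wait_sketch.go - poll a node until its Ready condition is True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22122-2686/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, "functional-752103", metav1.GetOptions{})
			if err != nil {
				return false, nil // API server not answering yet; keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
	fmt.Println("wait result:", err)
}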
	I1213 18:35:59.234362   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:35:59.300149   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:35:59.300200   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:35:59.300219   38829 retry.go:31] will retry after 511.331502ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:35:59.372301   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:35:59.426538   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:35:59.430102   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:35:59.430132   38829 retry.go:31] will retry after 426.700032ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:35:59.581486   38829 type.go:168] "Request Body" body=""
	I1213 18:35:59.581586   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:35:59.581963   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:35:59.812414   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:35:59.857973   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:35:59.893611   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:35:59.893688   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:35:59.893723   38829 retry.go:31] will retry after 310.068383ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:35:59.947559   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:35:59.947617   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:35:59.947640   38829 retry.go:31] will retry after 829.65637ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:00.080795   38829 type.go:168] "Request Body" body=""
	I1213 18:36:00.080875   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:00.081240   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:00.205923   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:36:00.416702   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:00.416818   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:00.416873   38829 retry.go:31] will retry after 579.133816ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:00.581369   38829 type.go:168] "Request Body" body=""
	I1213 18:36:00.581557   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:00.582010   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:00.778452   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:36:00.837536   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:00.837585   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:00.837604   38829 retry.go:31] will retry after 974.075863ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:00.996954   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:36:01.059672   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:01.059714   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:01.059763   38829 retry.go:31] will retry after 1.136000803s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:01.080856   38829 type.go:168] "Request Body" body=""
	I1213 18:36:01.080924   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:01.081261   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:01.081306   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:01.580749   38829 type.go:168] "Request Body" body=""
	I1213 18:36:01.580822   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:01.581172   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:01.812632   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:36:01.883701   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:01.883803   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:01.883825   38829 retry.go:31] will retry after 921.808005ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:02.081109   38829 type.go:168] "Request Body" body=""
	I1213 18:36:02.081198   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:02.081477   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:02.196877   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:36:02.253907   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:02.257605   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:02.257637   38829 retry.go:31] will retry after 1.546462752s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:02.581141   38829 type.go:168] "Request Body" body=""
	I1213 18:36:02.581286   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:02.581677   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:02.805901   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:36:02.889297   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:02.893182   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:02.893216   38829 retry.go:31] will retry after 1.247577285s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:03.081687   38829 type.go:168] "Request Body" body=""
	I1213 18:36:03.081764   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:03.082108   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:03.082162   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:03.580643   38829 type.go:168] "Request Body" body=""
	I1213 18:36:03.580714   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:03.580995   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:03.804445   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:36:03.865304   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:03.865353   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:03.865372   38829 retry.go:31] will retry after 3.450909707s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:04.080758   38829 type.go:168] "Request Body" body=""
	I1213 18:36:04.080837   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:04.081202   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:04.141517   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:36:04.204625   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:04.204670   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:04.204689   38829 retry.go:31] will retry after 3.409599879s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:04.581166   38829 type.go:168] "Request Body" body=""
	I1213 18:36:04.581250   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:04.581566   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:05.081373   38829 type.go:168] "Request Body" body=""
	I1213 18:36:05.081443   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:05.081739   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:05.581581   38829 type.go:168] "Request Body" body=""
	I1213 18:36:05.581657   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:05.581992   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:05.582049   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:06.080707   38829 type.go:168] "Request Body" body=""
	I1213 18:36:06.080786   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:06.081099   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:06.580765   38829 type.go:168] "Request Body" body=""
	I1213 18:36:06.580849   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:06.581220   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:07.080726   38829 type.go:168] "Request Body" body=""
	I1213 18:36:07.080806   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:07.081195   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:07.316533   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:36:07.393411   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:07.397246   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:07.397278   38829 retry.go:31] will retry after 2.442447522s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:07.581582   38829 type.go:168] "Request Body" body=""
	I1213 18:36:07.581660   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:07.582007   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:07.615412   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:36:07.670357   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:07.674453   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:07.674491   38829 retry.go:31] will retry after 4.254133001s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:08.080696   38829 type.go:168] "Request Body" body=""
	I1213 18:36:08.080805   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:08.081173   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:08.081221   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:08.581149   38829 type.go:168] "Request Body" body=""
	I1213 18:36:08.581249   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:08.581593   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:09.081583   38829 type.go:168] "Request Body" body=""
	I1213 18:36:09.081656   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:09.081980   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:09.581654   38829 type.go:168] "Request Body" body=""
	I1213 18:36:09.581729   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:09.582054   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:09.840484   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:36:09.900307   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:09.900343   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:09.900361   38829 retry.go:31] will retry after 4.640117862s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:10.081715   38829 type.go:168] "Request Body" body=""
	I1213 18:36:10.081794   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:10.082116   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:10.082183   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:10.580872   38829 type.go:168] "Request Body" body=""
	I1213 18:36:10.580959   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:10.581373   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:11.080692   38829 type.go:168] "Request Body" body=""
	I1213 18:36:11.080776   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:11.081115   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:11.580824   38829 type.go:168] "Request Body" body=""
	I1213 18:36:11.580896   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:11.581249   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:11.928812   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:36:11.987432   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:11.987481   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:11.987500   38829 retry.go:31] will retry after 8.251825899s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:12.081733   38829 type.go:168] "Request Body" body=""
	I1213 18:36:12.081819   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:12.082391   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:12.082470   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:12.580663   38829 type.go:168] "Request Body" body=""
	I1213 18:36:12.580742   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:12.581100   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:13.080737   38829 type.go:168] "Request Body" body=""
	I1213 18:36:13.080809   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:13.081119   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:13.580828   38829 type.go:168] "Request Body" body=""
	I1213 18:36:13.580900   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:13.581257   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:14.080983   38829 type.go:168] "Request Body" body=""
	I1213 18:36:14.081075   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:14.081364   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:14.540746   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:36:14.581321   38829 type.go:168] "Request Body" body=""
	I1213 18:36:14.581395   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:14.581672   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:14.581722   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:14.600534   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:14.600587   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:14.600605   38829 retry.go:31] will retry after 8.957681085s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:15.080748   38829 type.go:168] "Request Body" body=""
	I1213 18:36:15.080845   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:15.081200   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:15.580789   38829 type.go:168] "Request Body" body=""
	I1213 18:36:15.580868   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:15.581235   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:16.080743   38829 type.go:168] "Request Body" body=""
	I1213 18:36:16.080819   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:16.081170   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:16.580886   38829 type.go:168] "Request Body" body=""
	I1213 18:36:16.580958   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:16.581330   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:17.081614   38829 type.go:168] "Request Body" body=""
	I1213 18:36:17.081684   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:17.081955   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:17.081995   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:17.580662   38829 type.go:168] "Request Body" body=""
	I1213 18:36:17.580732   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:17.581063   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:18.080650   38829 type.go:168] "Request Body" body=""
	I1213 18:36:18.080721   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:18.081108   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:18.580672   38829 type.go:168] "Request Body" body=""
	I1213 18:36:18.580742   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:18.581079   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:19.081047   38829 type.go:168] "Request Body" body=""
	I1213 18:36:19.081115   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:19.081424   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:19.580732   38829 type.go:168] "Request Body" body=""
	I1213 18:36:19.580810   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:19.581191   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:19.581284   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:20.080706   38829 type.go:168] "Request Body" body=""
	I1213 18:36:20.080786   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:20.081124   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:20.239601   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:36:20.301361   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:20.301401   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:20.301420   38829 retry.go:31] will retry after 6.59814029s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:20.580747   38829 type.go:168] "Request Body" body=""
	I1213 18:36:20.580821   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:20.581125   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:21.080844   38829 type.go:168] "Request Body" body=""
	I1213 18:36:21.080933   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:21.081353   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:21.580686   38829 type.go:168] "Request Body" body=""
	I1213 18:36:21.580762   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:21.581080   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:22.080810   38829 type.go:168] "Request Body" body=""
	I1213 18:36:22.080884   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:22.081217   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:22.081274   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:22.580705   38829 type.go:168] "Request Body" body=""
	I1213 18:36:22.580799   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:22.581136   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:23.080675   38829 type.go:168] "Request Body" body=""
	I1213 18:36:23.080747   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:23.081137   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:23.558605   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:36:23.581258   38829 type.go:168] "Request Body" body=""
	I1213 18:36:23.581331   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:23.581605   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:23.617607   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:23.617653   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:23.617671   38829 retry.go:31] will retry after 14.669686806s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:24.081419   38829 type.go:168] "Request Body" body=""
	I1213 18:36:24.081508   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:24.081878   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:24.081930   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:24.580661   38829 type.go:168] "Request Body" body=""
	I1213 18:36:24.580735   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:24.581024   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:25.080794   38829 type.go:168] "Request Body" body=""
	I1213 18:36:25.080880   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:25.081347   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:25.580742   38829 type.go:168] "Request Body" body=""
	I1213 18:36:25.580816   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:25.581207   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:26.080781   38829 type.go:168] "Request Body" body=""
	I1213 18:36:26.080854   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:26.081166   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:26.580764   38829 type.go:168] "Request Body" body=""
	I1213 18:36:26.580862   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:26.581247   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:26.581300   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:26.900727   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:36:26.960607   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:26.960668   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:26.960687   38829 retry.go:31] will retry after 15.397640826s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:27.080883   38829 type.go:168] "Request Body" body=""
	I1213 18:36:27.080957   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:27.081297   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:27.580637   38829 type.go:168] "Request Body" body=""
	I1213 18:36:27.580703   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:27.580956   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:28.080641   38829 type.go:168] "Request Body" body=""
	I1213 18:36:28.080752   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:28.081081   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:28.580963   38829 type.go:168] "Request Body" body=""
	I1213 18:36:28.581049   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:28.581366   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:28.581418   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:29.081265   38829 type.go:168] "Request Body" body=""
	I1213 18:36:29.081330   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:29.081585   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:29.581341   38829 type.go:168] "Request Body" body=""
	I1213 18:36:29.581414   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:29.581724   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:30.083283   38829 type.go:168] "Request Body" body=""
	I1213 18:36:30.083370   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:30.083708   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:30.581559   38829 type.go:168] "Request Body" body=""
	I1213 18:36:30.581633   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:30.581902   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:30.581946   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:31.081665   38829 type.go:168] "Request Body" body=""
	I1213 18:36:31.081736   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:31.082102   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:31.580734   38829 type.go:168] "Request Body" body=""
	I1213 18:36:31.580815   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:31.581165   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:32.080588   38829 type.go:168] "Request Body" body=""
	I1213 18:36:32.080654   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:32.080909   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:32.581657   38829 type.go:168] "Request Body" body=""
	I1213 18:36:32.581734   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:32.582056   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:32.582116   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:33.080787   38829 type.go:168] "Request Body" body=""
	I1213 18:36:33.080867   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:33.081206   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:33.580678   38829 type.go:168] "Request Body" body=""
	I1213 18:36:33.580745   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:33.580998   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:34.080961   38829 type.go:168] "Request Body" body=""
	I1213 18:36:34.081065   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:34.081433   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:34.581228   38829 type.go:168] "Request Body" body=""
	I1213 18:36:34.581300   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:34.581636   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:35.081408   38829 type.go:168] "Request Body" body=""
	I1213 18:36:35.081478   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:35.081747   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:35.081790   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:35.581492   38829 type.go:168] "Request Body" body=""
	I1213 18:36:35.581568   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:35.581859   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:36.081553   38829 type.go:168] "Request Body" body=""
	I1213 18:36:36.081623   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:36.081928   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:36.581632   38829 type.go:168] "Request Body" body=""
	I1213 18:36:36.581711   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:36.582018   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:37.080726   38829 type.go:168] "Request Body" body=""
	I1213 18:36:37.080804   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:37.081189   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:37.580917   38829 type.go:168] "Request Body" body=""
	I1213 18:36:37.580993   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:37.581352   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:37.581446   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:38.080688   38829 type.go:168] "Request Body" body=""
	I1213 18:36:38.080770   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:38.081101   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:38.287495   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:36:38.357240   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:38.360822   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:38.360853   38829 retry.go:31] will retry after 30.28485436s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:38.581302   38829 type.go:168] "Request Body" body=""
	I1213 18:36:38.581374   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:38.581695   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:39.081218   38829 type.go:168] "Request Body" body=""
	I1213 18:36:39.081295   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:39.081664   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:39.581465   38829 type.go:168] "Request Body" body=""
	I1213 18:36:39.581533   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:39.581794   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:39.581852   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:40.081640   38829 type.go:168] "Request Body" body=""
	I1213 18:36:40.081724   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:40.082071   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:40.580714   38829 type.go:168] "Request Body" body=""
	I1213 18:36:40.580788   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:40.581147   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:41.080724   38829 type.go:168] "Request Body" body=""
	I1213 18:36:41.080801   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:41.081086   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:41.580719   38829 type.go:168] "Request Body" body=""
	I1213 18:36:41.580809   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:41.581140   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:42.080831   38829 type.go:168] "Request Body" body=""
	I1213 18:36:42.080909   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:42.081302   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:42.081363   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:42.358603   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:36:42.430743   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:42.430803   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:42.430822   38829 retry.go:31] will retry after 12.093455046s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:42.581106   38829 type.go:168] "Request Body" body=""
	I1213 18:36:42.581178   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:42.581444   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:43.081272   38829 type.go:168] "Request Body" body=""
	I1213 18:36:43.081354   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:43.081648   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:43.580658   38829 type.go:168] "Request Body" body=""
	I1213 18:36:43.580735   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:43.581055   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:44.080685   38829 type.go:168] "Request Body" body=""
	I1213 18:36:44.080795   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:44.081152   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:44.580685   38829 type.go:168] "Request Body" body=""
	I1213 18:36:44.580759   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:44.581102   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:44.581161   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:45.080810   38829 type.go:168] "Request Body" body=""
	I1213 18:36:45.080894   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:45.081226   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:45.581071   38829 type.go:168] "Request Body" body=""
	I1213 18:36:45.581137   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:45.581415   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:46.081136   38829 type.go:168] "Request Body" body=""
	I1213 18:36:46.081217   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:46.081567   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:46.581397   38829 type.go:168] "Request Body" body=""
	I1213 18:36:46.581468   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:46.581797   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:46.581852   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:47.081586   38829 type.go:168] "Request Body" body=""
	I1213 18:36:47.081660   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:47.081917   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:47.580671   38829 type.go:168] "Request Body" body=""
	I1213 18:36:47.580752   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:47.581109   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:48.080824   38829 type.go:168] "Request Body" body=""
	I1213 18:36:48.080903   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:48.081209   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:48.581175   38829 type.go:168] "Request Body" body=""
	I1213 18:36:48.581241   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:48.581504   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:49.081596   38829 type.go:168] "Request Body" body=""
	I1213 18:36:49.081669   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:49.082029   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:49.082084   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:49.580622   38829 type.go:168] "Request Body" body=""
	I1213 18:36:49.580704   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:49.581055   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:50.080743   38829 type.go:168] "Request Body" body=""
	I1213 18:36:50.080823   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:50.081147   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:50.580734   38829 type.go:168] "Request Body" body=""
	I1213 18:36:50.580806   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:50.581174   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:51.080882   38829 type.go:168] "Request Body" body=""
	I1213 18:36:51.080963   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:51.081341   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:51.580687   38829 type.go:168] "Request Body" body=""
	I1213 18:36:51.580761   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:51.581057   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:51.581110   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:52.080731   38829 type.go:168] "Request Body" body=""
	I1213 18:36:52.080817   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:52.081192   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:52.580893   38829 type.go:168] "Request Body" body=""
	I1213 18:36:52.580986   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:52.581347   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:53.080709   38829 type.go:168] "Request Body" body=""
	I1213 18:36:53.080779   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:53.081063   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:53.580755   38829 type.go:168] "Request Body" body=""
	I1213 18:36:53.580829   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:53.581182   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:53.581240   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:54.081104   38829 type.go:168] "Request Body" body=""
	I1213 18:36:54.081173   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:54.081470   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:54.525326   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:36:54.580832   38829 type.go:168] "Request Body" body=""
	I1213 18:36:54.580898   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:54.581173   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:54.600652   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:54.600694   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:54.600713   38829 retry.go:31] will retry after 41.212755678s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:55.081498   38829 type.go:168] "Request Body" body=""
	I1213 18:36:55.081571   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:55.081915   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:55.580632   38829 type.go:168] "Request Body" body=""
	I1213 18:36:55.580703   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:55.581066   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:56.080716   38829 type.go:168] "Request Body" body=""
	I1213 18:36:56.080780   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:56.081078   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:56.081124   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:56.580765   38829 type.go:168] "Request Body" body=""
	I1213 18:36:56.580847   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:56.581215   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:57.080817   38829 type.go:168] "Request Body" body=""
	I1213 18:36:57.080904   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:57.081246   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:57.580702   38829 type.go:168] "Request Body" body=""
	I1213 18:36:57.580781   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:57.581095   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:58.080724   38829 type.go:168] "Request Body" body=""
	I1213 18:36:58.080815   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:58.081171   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:58.081230   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:58.580804   38829 type.go:168] "Request Body" body=""
	I1213 18:36:58.580886   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:58.581230   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:59.080817   38829 type.go:168] "Request Body" body=""
	I1213 18:36:59.080891   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:59.081167   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:59.580749   38829 type.go:168] "Request Body" body=""
	I1213 18:36:59.580848   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:59.581262   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:00.080983   38829 type.go:168] "Request Body" body=""
	I1213 18:37:00.081091   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:00.081411   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:00.081460   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:00.580690   38829 type.go:168] "Request Body" body=""
	I1213 18:37:00.580766   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:00.581072   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:01.080673   38829 type.go:168] "Request Body" body=""
	I1213 18:37:01.080760   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:01.081112   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:01.580720   38829 type.go:168] "Request Body" body=""
	I1213 18:37:01.580794   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:01.581158   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:02.080753   38829 type.go:168] "Request Body" body=""
	I1213 18:37:02.080821   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:02.081110   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:02.580732   38829 type.go:168] "Request Body" body=""
	I1213 18:37:02.580806   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:02.581155   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:02.581205   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:03.080748   38829 type.go:168] "Request Body" body=""
	I1213 18:37:03.080823   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:03.081153   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:03.580615   38829 type.go:168] "Request Body" body=""
	I1213 18:37:03.580691   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:03.580974   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:04.080845   38829 type.go:168] "Request Body" body=""
	I1213 18:37:04.080916   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:04.081330   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:04.580902   38829 type.go:168] "Request Body" body=""
	I1213 18:37:04.581002   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:04.581380   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:04.581437   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:05.080788   38829 type.go:168] "Request Body" body=""
	I1213 18:37:05.080867   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:05.081182   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:05.580731   38829 type.go:168] "Request Body" body=""
	I1213 18:37:05.580826   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:05.581178   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:06.080721   38829 type.go:168] "Request Body" body=""
	I1213 18:37:06.080796   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:06.081180   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:06.580658   38829 type.go:168] "Request Body" body=""
	I1213 18:37:06.580727   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:06.581063   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:07.080796   38829 type.go:168] "Request Body" body=""
	I1213 18:37:07.080883   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:07.081219   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:07.081280   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:07.580756   38829 type.go:168] "Request Body" body=""
	I1213 18:37:07.580835   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:07.581166   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:08.080678   38829 type.go:168] "Request Body" body=""
	I1213 18:37:08.080757   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:08.081073   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:08.580840   38829 type.go:168] "Request Body" body=""
	I1213 18:37:08.580922   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:08.581286   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:08.646539   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:37:08.707161   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:37:08.707197   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:37:08.707216   38829 retry.go:31] will retry after 43.904706278s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:37:09.080730   38829 type.go:168] "Request Body" body=""
	I1213 18:37:09.080812   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:09.081148   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:09.580688   38829 type.go:168] "Request Body" body=""
	I1213 18:37:09.580756   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:09.581080   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:09.581129   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:10.080738   38829 type.go:168] "Request Body" body=""
	I1213 18:37:10.080818   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:10.081184   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:10.580752   38829 type.go:168] "Request Body" body=""
	I1213 18:37:10.580829   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:10.581212   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:11.080819   38829 type.go:168] "Request Body" body=""
	I1213 18:37:11.080905   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:11.081275   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:11.580750   38829 type.go:168] "Request Body" body=""
	I1213 18:37:11.580826   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:11.581167   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:11.581218   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:12.080976   38829 type.go:168] "Request Body" body=""
	I1213 18:37:12.081075   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:12.081413   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:12.581163   38829 type.go:168] "Request Body" body=""
	I1213 18:37:12.581239   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:12.581504   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:13.081350   38829 type.go:168] "Request Body" body=""
	I1213 18:37:13.081422   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:13.081759   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:13.581540   38829 type.go:168] "Request Body" body=""
	I1213 18:37:13.581621   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:13.581958   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:13.582012   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:14.080637   38829 type.go:168] "Request Body" body=""
	I1213 18:37:14.080749   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:14.081037   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:14.580751   38829 type.go:168] "Request Body" body=""
	I1213 18:37:14.580822   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:14.581126   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:15.080809   38829 type.go:168] "Request Body" body=""
	I1213 18:37:15.080894   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:15.081289   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:15.580701   38829 type.go:168] "Request Body" body=""
	I1213 18:37:15.580784   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:15.581161   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:16.080844   38829 type.go:168] "Request Body" body=""
	I1213 18:37:16.080922   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:16.081237   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:16.081285   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:16.580898   38829 type.go:168] "Request Body" body=""
	I1213 18:37:16.581034   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:16.581399   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:17.080661   38829 type.go:168] "Request Body" body=""
	I1213 18:37:17.080737   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:17.080990   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:17.580692   38829 type.go:168] "Request Body" body=""
	I1213 18:37:17.580803   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:17.581102   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:18.080750   38829 type.go:168] "Request Body" body=""
	I1213 18:37:18.080868   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:18.081221   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:18.581194   38829 type.go:168] "Request Body" body=""
	I1213 18:37:18.581282   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:18.581589   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:18.581661   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:19.080720   38829 type.go:168] "Request Body" body=""
	I1213 18:37:19.080794   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:19.081153   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:19.580707   38829 type.go:168] "Request Body" body=""
	I1213 18:37:19.580807   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:19.581139   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:20.080683   38829 type.go:168] "Request Body" body=""
	I1213 18:37:20.080783   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:20.081124   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:20.580699   38829 type.go:168] "Request Body" body=""
	I1213 18:37:20.580768   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:20.581140   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:21.080704   38829 type.go:168] "Request Body" body=""
	I1213 18:37:21.080813   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:21.081147   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:21.081200   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:21.580715   38829 type.go:168] "Request Body" body=""
	I1213 18:37:21.580794   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:21.581158   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:22.080770   38829 type.go:168] "Request Body" body=""
	I1213 18:37:22.080878   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:22.081249   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:22.580823   38829 type.go:168] "Request Body" body=""
	I1213 18:37:22.580919   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:22.581227   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:23.080672   38829 type.go:168] "Request Body" body=""
	I1213 18:37:23.080740   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:23.081069   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:23.580725   38829 type.go:168] "Request Body" body=""
	I1213 18:37:23.580816   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:23.581144   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:23.581194   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:24.081109   38829 type.go:168] "Request Body" body=""
	I1213 18:37:24.081180   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:24.081522   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:24.581618   38829 type.go:168] "Request Body" body=""
	I1213 18:37:24.581687   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:24.582010   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:25.080756   38829 type.go:168] "Request Body" body=""
	I1213 18:37:25.080839   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:25.081197   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:25.580943   38829 type.go:168] "Request Body" body=""
	I1213 18:37:25.581038   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:25.581354   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:25.581416   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:26.080723   38829 type.go:168] "Request Body" body=""
	I1213 18:37:26.080835   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:26.081227   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:26.580735   38829 type.go:168] "Request Body" body=""
	I1213 18:37:26.580817   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:26.581160   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:27.080700   38829 type.go:168] "Request Body" body=""
	I1213 18:37:27.080784   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:27.081126   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:27.580667   38829 type.go:168] "Request Body" body=""
	I1213 18:37:27.580751   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:27.581089   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:28.080604   38829 type.go:168] "Request Body" body=""
	I1213 18:37:28.080698   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:28.081045   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:28.081097   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:28.580817   38829 type.go:168] "Request Body" body=""
	I1213 18:37:28.580906   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:28.581222   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:29.080796   38829 type.go:168] "Request Body" body=""
	I1213 18:37:29.080873   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:29.081151   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:29.580777   38829 type.go:168] "Request Body" body=""
	I1213 18:37:29.580870   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:29.581199   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:30.080803   38829 type.go:168] "Request Body" body=""
	I1213 18:37:30.080884   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:30.081237   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:30.081287   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:30.580672   38829 type.go:168] "Request Body" body=""
	I1213 18:37:30.580745   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:30.581077   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:31.081506   38829 type.go:168] "Request Body" body=""
	I1213 18:37:31.081581   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:31.081922   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:31.580645   38829 type.go:168] "Request Body" body=""
	I1213 18:37:31.580718   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:31.581102   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:32.080661   38829 type.go:168] "Request Body" body=""
	I1213 18:37:32.080783   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:32.081114   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:32.580825   38829 type.go:168] "Request Body" body=""
	I1213 18:37:32.580936   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:32.581248   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:32.581295   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:33.080746   38829 type.go:168] "Request Body" body=""
	I1213 18:37:33.080835   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:33.081225   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:33.580676   38829 type.go:168] "Request Body" body=""
	I1213 18:37:33.580750   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:33.581029   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:34.081646   38829 type.go:168] "Request Body" body=""
	I1213 18:37:34.081715   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:34.082009   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:34.580682   38829 type.go:168] "Request Body" body=""
	I1213 18:37:34.580780   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:34.581134   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:35.080825   38829 type.go:168] "Request Body" body=""
	I1213 18:37:35.080895   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:35.081246   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:35.081298   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:35.580940   38829 type.go:168] "Request Body" body=""
	I1213 18:37:35.581051   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:35.581350   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:35.813701   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:37:35.887144   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:37:35.887179   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:37:35.887279   38829 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 18:37:36.080750   38829 type.go:168] "Request Body" body=""
	I1213 18:37:36.080833   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:36.081177   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:36.580678   38829 type.go:168] "Request Body" body=""
	I1213 18:37:36.580752   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:36.581058   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:37.080714   38829 type.go:168] "Request Body" body=""
	I1213 18:37:37.080814   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:37.081161   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:37.580851   38829 type.go:168] "Request Body" body=""
	I1213 18:37:37.580926   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:37.581239   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:37.581288   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:38.080774   38829 type.go:168] "Request Body" body=""
	I1213 18:37:38.080865   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:38.081305   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:38.581237   38829 type.go:168] "Request Body" body=""
	I1213 18:37:38.581321   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:38.581645   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:39.081533   38829 type.go:168] "Request Body" body=""
	I1213 18:37:39.081612   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:39.081897   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:39.581503   38829 type.go:168] "Request Body" body=""
	I1213 18:37:39.581567   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:39.581828   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:39.581866   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:40.081636   38829 type.go:168] "Request Body" body=""
	I1213 18:37:40.081710   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:40.082035   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:40.580686   38829 type.go:168] "Request Body" body=""
	I1213 18:37:40.580764   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:40.581082   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:41.080659   38829 type.go:168] "Request Body" body=""
	I1213 18:37:41.080744   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:41.081073   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:41.580856   38829 type.go:168] "Request Body" body=""
	I1213 18:37:41.580929   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:41.581268   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:42.080912   38829 type.go:168] "Request Body" body=""
	I1213 18:37:42.081054   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:42.081405   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:42.081473   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:42.581188   38829 type.go:168] "Request Body" body=""
	I1213 18:37:42.581268   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:42.581539   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:43.081397   38829 type.go:168] "Request Body" body=""
	I1213 18:37:43.081474   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:43.081823   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:43.581624   38829 type.go:168] "Request Body" body=""
	I1213 18:37:43.581704   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:43.582019   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:44.081168   38829 type.go:168] "Request Body" body=""
	I1213 18:37:44.081243   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:44.081539   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:44.081581   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:44.581405   38829 type.go:168] "Request Body" body=""
	I1213 18:37:44.581481   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:44.581805   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:45.081836   38829 type.go:168] "Request Body" body=""
	I1213 18:37:45.081938   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:45.082358   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:45.580699   38829 type.go:168] "Request Body" body=""
	I1213 18:37:45.580773   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:45.581090   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:46.080825   38829 type.go:168] "Request Body" body=""
	I1213 18:37:46.080898   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:46.081231   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:46.580728   38829 type.go:168] "Request Body" body=""
	I1213 18:37:46.580818   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:46.581180   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:46.581235   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:47.080684   38829 type.go:168] "Request Body" body=""
	I1213 18:37:47.080759   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:47.081124   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:47.580848   38829 type.go:168] "Request Body" body=""
	I1213 18:37:47.580921   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:47.581277   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:48.080712   38829 type.go:168] "Request Body" body=""
	I1213 18:37:48.080804   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:48.081135   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:48.580811   38829 type.go:168] "Request Body" body=""
	I1213 18:37:48.580882   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:48.581154   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:49.081058   38829 type.go:168] "Request Body" body=""
	I1213 18:37:49.081150   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:49.081477   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:49.081542   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:49.581293   38829 type.go:168] "Request Body" body=""
	I1213 18:37:49.581370   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:49.581713   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:50.081496   38829 type.go:168] "Request Body" body=""
	I1213 18:37:50.081562   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:50.081847   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:50.581629   38829 type.go:168] "Request Body" body=""
	I1213 18:37:50.581706   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:50.582071   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:51.080700   38829 type.go:168] "Request Body" body=""
	I1213 18:37:51.080790   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:51.081171   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:51.580683   38829 type.go:168] "Request Body" body=""
	I1213 18:37:51.580754   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:51.581047   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:51.581094   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:52.080714   38829 type.go:168] "Request Body" body=""
	I1213 18:37:52.080787   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:52.081175   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:52.580775   38829 type.go:168] "Request Body" body=""
	I1213 18:37:52.580867   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:52.581254   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:52.612466   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:37:52.672905   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:37:52.677070   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:37:52.677165   38829 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 18:37:52.680309   38829 out.go:179] * Enabled addons: 
	I1213 18:37:52.684021   38829 addons.go:530] duration metric: took 1m54.470472162s for enable addons: enabled=[]
	I1213 18:37:53.081534   38829 type.go:168] "Request Body" body=""
	I1213 18:37:53.081600   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:53.081904   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:53.580635   38829 type.go:168] "Request Body" body=""
	I1213 18:37:53.580711   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:53.581033   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:54.080643   38829 type.go:168] "Request Body" body=""
	I1213 18:37:54.080739   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:54.082029   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	W1213 18:37:54.082091   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:54.581623   38829 type.go:168] "Request Body" body=""
	I1213 18:37:54.581698   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:54.581957   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:55.080687   38829 type.go:168] "Request Body" body=""
	I1213 18:37:55.080780   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:55.081111   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:55.580756   38829 type.go:168] "Request Body" body=""
	I1213 18:37:55.580828   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:55.581197   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:56.080640   38829 type.go:168] "Request Body" body=""
	I1213 18:37:56.080714   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:56.081049   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:56.580613   38829 type.go:168] "Request Body" body=""
	I1213 18:37:56.580689   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:56.581045   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:56.581101   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:57.080597   38829 type.go:168] "Request Body" body=""
	I1213 18:37:57.080691   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:57.081049   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:57.580930   38829 type.go:168] "Request Body" body=""
	I1213 18:37:57.581038   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:57.585714   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1213 18:37:58.081512   38829 type.go:168] "Request Body" body=""
	I1213 18:37:58.081591   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:58.081945   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:58.580703   38829 type.go:168] "Request Body" body=""
	I1213 18:37:58.580778   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:58.581145   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:58.581214   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:59.081515   38829 type.go:168] "Request Body" body=""
	I1213 18:37:59.081606   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:59.081931   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:59.580661   38829 type.go:168] "Request Body" body=""
	I1213 18:37:59.580732   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:59.581072   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:00.080803   38829 type.go:168] "Request Body" body=""
	I1213 18:38:00.080888   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:00.081237   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:00.581619   38829 type.go:168] "Request Body" body=""
	I1213 18:38:00.581690   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:00.582027   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:00.582084   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:01.080751   38829 type.go:168] "Request Body" body=""
	I1213 18:38:01.080838   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:01.081194   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:01.580724   38829 type.go:168] "Request Body" body=""
	I1213 18:38:01.580804   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:01.581152   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:02.080668   38829 type.go:168] "Request Body" body=""
	I1213 18:38:02.080746   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:02.081102   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:02.580776   38829 type.go:168] "Request Body" body=""
	I1213 18:38:02.580850   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:02.581187   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:03.080936   38829 type.go:168] "Request Body" body=""
	I1213 18:38:03.081031   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:03.081349   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:03.081405   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:03.580669   38829 type.go:168] "Request Body" body=""
	I1213 18:38:03.580767   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:03.581056   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:04.080818   38829 type.go:168] "Request Body" body=""
	I1213 18:38:04.080899   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:04.081235   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:04.580930   38829 type.go:168] "Request Body" body=""
	I1213 18:38:04.581025   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:04.581369   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:05.080659   38829 type.go:168] "Request Body" body=""
	I1213 18:38:05.080743   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:05.081076   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:05.580757   38829 type.go:168] "Request Body" body=""
	I1213 18:38:05.580829   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:05.581176   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:05.581227   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:06.080773   38829 type.go:168] "Request Body" body=""
	I1213 18:38:06.080851   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:06.081201   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:06.580678   38829 type.go:168] "Request Body" body=""
	I1213 18:38:06.580751   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:06.581040   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:07.080776   38829 type.go:168] "Request Body" body=""
	I1213 18:38:07.080848   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:07.081170   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:07.580731   38829 type.go:168] "Request Body" body=""
	I1213 18:38:07.580806   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:07.581160   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:08.080772   38829 type.go:168] "Request Body" body=""
	I1213 18:38:08.080849   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:08.081161   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:08.081226   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:08.580947   38829 type.go:168] "Request Body" body=""
	I1213 18:38:08.581044   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:08.581405   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:09.081557   38829 type.go:168] "Request Body" body=""
	I1213 18:38:09.081630   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:09.081955   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:09.580701   38829 type.go:168] "Request Body" body=""
	I1213 18:38:09.580777   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:09.581100   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:10.080747   38829 type.go:168] "Request Body" body=""
	I1213 18:38:10.080835   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:10.081225   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:10.081288   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:10.580771   38829 type.go:168] "Request Body" body=""
	I1213 18:38:10.580886   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:10.581218   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:11.080922   38829 type.go:168] "Request Body" body=""
	I1213 18:38:11.080992   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:11.081274   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:11.581973   38829 type.go:168] "Request Body" body=""
	I1213 18:38:11.582052   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:11.582377   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:12.081104   38829 type.go:168] "Request Body" body=""
	I1213 18:38:12.081179   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:12.081532   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:12.081585   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:12.581355   38829 type.go:168] "Request Body" body=""
	I1213 18:38:12.581430   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:12.581762   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:13.081529   38829 type.go:168] "Request Body" body=""
	I1213 18:38:13.081604   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:13.081921   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:13.580639   38829 type.go:168] "Request Body" body=""
	I1213 18:38:13.580716   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:13.581089   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:14.081616   38829 type.go:168] "Request Body" body=""
	I1213 18:38:14.081703   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:14.082037   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:14.082090   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:14.580727   38829 type.go:168] "Request Body" body=""
	I1213 18:38:14.580815   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:14.581180   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:15.080903   38829 type.go:168] "Request Body" body=""
	I1213 18:38:15.080982   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:15.081338   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:15.581041   38829 type.go:168] "Request Body" body=""
	I1213 18:38:15.581119   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:15.581474   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:16.081265   38829 type.go:168] "Request Body" body=""
	I1213 18:38:16.081338   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:16.081665   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:16.581493   38829 type.go:168] "Request Body" body=""
	I1213 18:38:16.581589   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:16.581945   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:16.581999   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:17.080642   38829 type.go:168] "Request Body" body=""
	I1213 18:38:17.080713   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:17.080986   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:17.580719   38829 type.go:168] "Request Body" body=""
	I1213 18:38:17.580796   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:17.581138   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:18.080868   38829 type.go:168] "Request Body" body=""
	I1213 18:38:18.080948   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:18.081331   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:18.581194   38829 type.go:168] "Request Body" body=""
	I1213 18:38:18.581268   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:18.581529   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:19.081522   38829 type.go:168] "Request Body" body=""
	I1213 18:38:19.081598   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:19.081945   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:19.082001   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:19.580714   38829 type.go:168] "Request Body" body=""
	I1213 18:38:19.580805   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:19.581171   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:20.080873   38829 type.go:168] "Request Body" body=""
	I1213 18:38:20.080948   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:20.081259   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:20.580728   38829 type.go:168] "Request Body" body=""
	I1213 18:38:20.580811   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:20.581178   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:21.080749   38829 type.go:168] "Request Body" body=""
	I1213 18:38:21.080849   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:21.081219   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:21.580655   38829 type.go:168] "Request Body" body=""
	I1213 18:38:21.580730   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:21.581101   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:21.581180   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:22.080740   38829 type.go:168] "Request Body" body=""
	I1213 18:38:22.080819   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:22.081200   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:22.580922   38829 type.go:168] "Request Body" body=""
	I1213 18:38:22.581020   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:22.581389   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:23.080725   38829 type.go:168] "Request Body" body=""
	I1213 18:38:23.080802   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:23.081145   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:23.580880   38829 type.go:168] "Request Body" body=""
	I1213 18:38:23.580958   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:23.581338   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:23.581392   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:24.081664   38829 type.go:168] "Request Body" body=""
	I1213 18:38:24.081759   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:24.082117   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:24.580825   38829 type.go:168] "Request Body" body=""
	I1213 18:38:24.580901   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:24.581233   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:25.080731   38829 type.go:168] "Request Body" body=""
	I1213 18:38:25.080813   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:25.081203   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:25.580730   38829 type.go:168] "Request Body" body=""
	I1213 18:38:25.580807   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:25.581142   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:26.080689   38829 type.go:168] "Request Body" body=""
	I1213 18:38:26.080779   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:26.081103   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:26.081156   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:26.580750   38829 type.go:168] "Request Body" body=""
	I1213 18:38:26.580831   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:26.581177   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:27.080736   38829 type.go:168] "Request Body" body=""
	I1213 18:38:27.080812   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:27.081191   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:27.580696   38829 type.go:168] "Request Body" body=""
	I1213 18:38:27.580770   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:27.581094   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:28.080768   38829 type.go:168] "Request Body" body=""
	I1213 18:38:28.080841   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:28.081147   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:28.081197   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:28.581180   38829 type.go:168] "Request Body" body=""
	I1213 18:38:28.581274   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:28.581646   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:29.080821   38829 type.go:168] "Request Body" body=""
	I1213 18:38:29.080892   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:29.081191   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:29.580951   38829 type.go:168] "Request Body" body=""
	I1213 18:38:29.581053   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:29.581390   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:30.080799   38829 type.go:168] "Request Body" body=""
	I1213 18:38:30.080882   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:30.081350   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:30.081432   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:30.580706   38829 type.go:168] "Request Body" body=""
	I1213 18:38:30.580834   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:30.581124   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:31.080774   38829 type.go:168] "Request Body" body=""
	I1213 18:38:31.080864   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:31.081259   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:31.580984   38829 type.go:168] "Request Body" body=""
	I1213 18:38:31.581082   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:31.581450   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:32.080667   38829 type.go:168] "Request Body" body=""
	I1213 18:38:32.080743   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:32.081034   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:32.580743   38829 type.go:168] "Request Body" body=""
	I1213 18:38:32.580816   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:32.581200   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:32.581255   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:33.080726   38829 type.go:168] "Request Body" body=""
	I1213 18:38:33.080809   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:33.081182   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:33.580725   38829 type.go:168] "Request Body" body=""
	I1213 18:38:33.580795   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:33.581164   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:34.081257   38829 type.go:168] "Request Body" body=""
	I1213 18:38:34.081337   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:34.081668   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:34.581504   38829 type.go:168] "Request Body" body=""
	I1213 18:38:34.581582   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:34.581919   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:34.581974   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:35.080651   38829 type.go:168] "Request Body" body=""
	I1213 18:38:35.080731   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:35.081024   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:35.580713   38829 type.go:168] "Request Body" body=""
	I1213 18:38:35.580792   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:35.581140   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:36.080919   38829 type.go:168] "Request Body" body=""
	I1213 18:38:36.080998   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:36.081335   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:36.580681   38829 type.go:168] "Request Body" body=""
	I1213 18:38:36.580752   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:36.581033   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:37.080717   38829 type.go:168] "Request Body" body=""
	I1213 18:38:37.080818   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:37.081165   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:37.081218   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:37.580733   38829 type.go:168] "Request Body" body=""
	I1213 18:38:37.580809   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:37.581143   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:38.080691   38829 type.go:168] "Request Body" body=""
	I1213 18:38:38.080786   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:38.081186   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:38.581125   38829 type.go:168] "Request Body" body=""
	I1213 18:38:38.581202   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:38.581601   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:39.081372   38829 type.go:168] "Request Body" body=""
	I1213 18:38:39.081450   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:39.081746   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:39.081795   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:39.581476   38829 type.go:168] "Request Body" body=""
	I1213 18:38:39.581574   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:39.581834   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:40.080652   38829 type.go:168] "Request Body" body=""
	I1213 18:38:40.080736   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:40.081070   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:40.580762   38829 type.go:168] "Request Body" body=""
	I1213 18:38:40.580837   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:40.581170   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:41.080790   38829 type.go:168] "Request Body" body=""
	I1213 18:38:41.080859   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:41.081138   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:41.580736   38829 type.go:168] "Request Body" body=""
	I1213 18:38:41.580815   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:41.581161   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:41.581213   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:42.081232   38829 type.go:168] "Request Body" body=""
	I1213 18:38:42.081358   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:42.081865   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:42.580689   38829 type.go:168] "Request Body" body=""
	I1213 18:38:42.580771   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:42.581121   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:43.080823   38829 type.go:168] "Request Body" body=""
	I1213 18:38:43.080907   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:43.081225   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:43.580730   38829 type.go:168] "Request Body" body=""
	I1213 18:38:43.580836   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:43.581158   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:44.081575   38829 type.go:168] "Request Body" body=""
	I1213 18:38:44.081651   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:44.081974   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:44.082018   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:44.580749   38829 type.go:168] "Request Body" body=""
	I1213 18:38:44.580850   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:44.581196   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:45.080840   38829 type.go:168] "Request Body" body=""
	I1213 18:38:45.080920   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:45.081286   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:45.580954   38829 type.go:168] "Request Body" body=""
	I1213 18:38:45.581055   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:45.581346   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:46.081059   38829 type.go:168] "Request Body" body=""
	I1213 18:38:46.081132   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:46.081421   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:46.581118   38829 type.go:168] "Request Body" body=""
	I1213 18:38:46.581200   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:46.581535   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:46.581590   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:47.081106   38829 type.go:168] "Request Body" body=""
	I1213 18:38:47.081224   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:47.081480   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:47.581264   38829 type.go:168] "Request Body" body=""
	I1213 18:38:47.581336   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:47.581677   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:48.081348   38829 type.go:168] "Request Body" body=""
	I1213 18:38:48.081420   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:48.081786   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:48.580712   38829 type.go:168] "Request Body" body=""
	I1213 18:38:48.580809   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:48.581132   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:49.081267   38829 type.go:168] "Request Body" body=""
	I1213 18:38:49.081338   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:49.081661   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:49.081719   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:49.581307   38829 type.go:168] "Request Body" body=""
	I1213 18:38:49.581390   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:49.581723   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:50.081491   38829 type.go:168] "Request Body" body=""
	I1213 18:38:50.081558   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:50.081836   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:50.581617   38829 type.go:168] "Request Body" body=""
	I1213 18:38:50.581690   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:50.582006   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:51.080731   38829 type.go:168] "Request Body" body=""
	I1213 18:38:51.080809   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:51.081173   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:51.580635   38829 type.go:168] "Request Body" body=""
	I1213 18:38:51.580703   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:51.581040   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:51.581092   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:52.080731   38829 type.go:168] "Request Body" body=""
	I1213 18:38:52.080812   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:52.081170   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:52.580897   38829 type.go:168] "Request Body" body=""
	I1213 18:38:52.580975   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:52.581319   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:53.081002   38829 type.go:168] "Request Body" body=""
	I1213 18:38:53.081090   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:53.081366   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:53.580734   38829 type.go:168] "Request Body" body=""
	I1213 18:38:53.580811   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:53.581210   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:53.581264   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:54.081117   38829 type.go:168] "Request Body" body=""
	I1213 18:38:54.081197   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:54.081547   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:54.581298   38829 type.go:168] "Request Body" body=""
	I1213 18:38:54.581371   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:54.581643   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:55.081403   38829 type.go:168] "Request Body" body=""
	I1213 18:38:55.081482   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:55.081842   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:55.581455   38829 type.go:168] "Request Body" body=""
	I1213 18:38:55.581534   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:55.581851   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:55.581906   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:56.080602   38829 type.go:168] "Request Body" body=""
	I1213 18:38:56.080680   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:56.081049   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:56.580731   38829 type.go:168] "Request Body" body=""
	I1213 18:38:56.580803   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:56.581197   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:57.080761   38829 type.go:168] "Request Body" body=""
	I1213 18:38:57.080844   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:57.081204   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:57.580625   38829 type.go:168] "Request Body" body=""
	I1213 18:38:57.580703   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:57.580967   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:58.080697   38829 type.go:168] "Request Body" body=""
	I1213 18:38:58.080767   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:58.081073   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:58.081121   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:58.580746   38829 type.go:168] "Request Body" body=""
	I1213 18:38:58.580821   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:58.581193   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:59.080619   38829 type.go:168] "Request Body" body=""
	I1213 18:38:59.080690   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:59.080957   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:59.580697   38829 type.go:168] "Request Body" body=""
	I1213 18:38:59.580775   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:59.581075   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:00.080781   38829 type.go:168] "Request Body" body=""
	I1213 18:39:00.080864   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:00.081214   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:00.081263   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:00.580868   38829 type.go:168] "Request Body" body=""
	I1213 18:39:00.580959   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:00.581261   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:01.080726   38829 type.go:168] "Request Body" body=""
	I1213 18:39:01.080795   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:01.081160   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:01.580755   38829 type.go:168] "Request Body" body=""
	I1213 18:39:01.580837   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:01.581212   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:02.080885   38829 type.go:168] "Request Body" body=""
	I1213 18:39:02.080961   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:02.081256   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:02.081306   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:02.580741   38829 type.go:168] "Request Body" body=""
	I1213 18:39:02.580818   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:02.581177   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:03.080736   38829 type.go:168] "Request Body" body=""
	I1213 18:39:03.080810   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:03.081170   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:03.580700   38829 type.go:168] "Request Body" body=""
	I1213 18:39:03.580773   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:03.581077   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:04.080632   38829 type.go:168] "Request Body" body=""
	I1213 18:39:04.080714   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:04.081077   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:04.580778   38829 type.go:168] "Request Body" body=""
	I1213 18:39:04.580863   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:04.581243   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:04.581303   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:05.080687   38829 type.go:168] "Request Body" body=""
	I1213 18:39:05.080765   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:05.081059   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:05.580796   38829 type.go:168] "Request Body" body=""
	I1213 18:39:05.580872   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:05.581215   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:06.080727   38829 type.go:168] "Request Body" body=""
	I1213 18:39:06.080803   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:06.081158   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:06.580837   38829 type.go:168] "Request Body" body=""
	I1213 18:39:06.580917   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:06.581202   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:07.080725   38829 type.go:168] "Request Body" body=""
	I1213 18:39:07.080808   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:07.081164   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:07.081214   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:07.580716   38829 type.go:168] "Request Body" body=""
	I1213 18:39:07.580794   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:07.581129   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:08.080858   38829 type.go:168] "Request Body" body=""
	I1213 18:39:08.080931   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:08.081213   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:08.581137   38829 type.go:168] "Request Body" body=""
	I1213 18:39:08.581207   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:08.581513   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:09.081065   38829 type.go:168] "Request Body" body=""
	I1213 18:39:09.081139   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:09.081514   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:09.081581   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:09.581276   38829 type.go:168] "Request Body" body=""
	I1213 18:39:09.581342   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:09.581644   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:10.081407   38829 type.go:168] "Request Body" body=""
	I1213 18:39:10.081483   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:10.081851   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:10.581496   38829 type.go:168] "Request Body" body=""
	I1213 18:39:10.581567   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:10.581887   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:11.080629   38829 type.go:168] "Request Body" body=""
	I1213 18:39:11.080701   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:11.081001   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:11.580726   38829 type.go:168] "Request Body" body=""
	I1213 18:39:11.580805   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:11.581121   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:11.581171   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:12.080760   38829 type.go:168] "Request Body" body=""
	I1213 18:39:12.080838   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:12.081152   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:12.580671   38829 type.go:168] "Request Body" body=""
	I1213 18:39:12.580744   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:12.581068   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:13.080734   38829 type.go:168] "Request Body" body=""
	I1213 18:39:13.080808   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:13.081124   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:13.580863   38829 type.go:168] "Request Body" body=""
	I1213 18:39:13.580937   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:13.581281   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:13.581332   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:14.081577   38829 type.go:168] "Request Body" body=""
	I1213 18:39:14.081653   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:14.081950   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:14.580638   38829 type.go:168] "Request Body" body=""
	I1213 18:39:14.580713   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:14.581046   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:15.080717   38829 type.go:168] "Request Body" body=""
	I1213 18:39:15.080825   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:15.081191   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:15.580864   38829 type.go:168] "Request Body" body=""
	I1213 18:39:15.580936   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:15.581210   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:16.080732   38829 type.go:168] "Request Body" body=""
	I1213 18:39:16.080807   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:16.081171   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:16.081237   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:16.580894   38829 type.go:168] "Request Body" body=""
	I1213 18:39:16.580969   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:16.581301   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:17.080988   38829 type.go:168] "Request Body" body=""
	I1213 18:39:17.081089   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:17.081420   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:17.580765   38829 type.go:168] "Request Body" body=""
	I1213 18:39:17.580844   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:17.581202   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:18.080887   38829 type.go:168] "Request Body" body=""
	I1213 18:39:18.080962   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:18.081285   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:18.081330   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:18.581099   38829 type.go:168] "Request Body" body=""
	I1213 18:39:18.581170   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:18.581423   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:19.081384   38829 type.go:168] "Request Body" body=""
	I1213 18:39:19.081453   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:19.081768   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:19.581414   38829 type.go:168] "Request Body" body=""
	I1213 18:39:19.581490   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:19.581786   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:20.081602   38829 type.go:168] "Request Body" body=""
	I1213 18:39:20.081678   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:20.081965   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:20.082018   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:20.580679   38829 type.go:168] "Request Body" body=""
	I1213 18:39:20.580788   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:20.581147   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:21.080703   38829 type.go:168] "Request Body" body=""
	I1213 18:39:21.080796   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:21.081146   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:21.580784   38829 type.go:168] "Request Body" body=""
	I1213 18:39:21.580863   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:21.581224   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:22.080782   38829 type.go:168] "Request Body" body=""
	I1213 18:39:22.080855   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:22.081300   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:22.580762   38829 type.go:168] "Request Body" body=""
	I1213 18:39:22.580835   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:22.581147   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:22.581194   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:23.080788   38829 type.go:168] "Request Body" body=""
	I1213 18:39:23.080860   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:23.081193   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:23.580732   38829 type.go:168] "Request Body" body=""
	I1213 18:39:23.580820   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:23.581147   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:24.081435   38829 type.go:168] "Request Body" body=""
	I1213 18:39:24.081530   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:24.081884   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:24.581587   38829 type.go:168] "Request Body" body=""
	I1213 18:39:24.581657   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:24.581912   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:24.581951   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:25.080657   38829 type.go:168] "Request Body" body=""
	I1213 18:39:25.080734   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:25.081179   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:25.580733   38829 type.go:168] "Request Body" body=""
	I1213 18:39:25.580821   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:25.581190   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:26.080869   38829 type.go:168] "Request Body" body=""
	I1213 18:39:26.080936   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:26.081224   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:26.580741   38829 type.go:168] "Request Body" body=""
	I1213 18:39:26.580814   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:26.581148   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:27.080703   38829 type.go:168] "Request Body" body=""
	I1213 18:39:27.080786   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:27.081111   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:27.081165   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:27.580724   38829 type.go:168] "Request Body" body=""
	I1213 18:39:27.580797   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:27.581139   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:28.080722   38829 type.go:168] "Request Body" body=""
	I1213 18:39:28.080793   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:28.081199   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:28.580834   38829 type.go:168] "Request Body" body=""
	I1213 18:39:28.580915   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:28.581280   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:29.081285   38829 type.go:168] "Request Body" body=""
	I1213 18:39:29.081351   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:29.081628   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:29.081672   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:29.581065   38829 type.go:168] "Request Body" body=""
	I1213 18:39:29.581140   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:29.581481   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:30.081344   38829 type.go:168] "Request Body" body=""
	I1213 18:39:30.081439   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:30.081896   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:30.580671   38829 type.go:168] "Request Body" body=""
	I1213 18:39:30.580748   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:30.581066   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:31.080743   38829 type.go:168] "Request Body" body=""
	I1213 18:39:31.080834   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:31.081162   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:31.580866   38829 type.go:168] "Request Body" body=""
	I1213 18:39:31.580942   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:31.581337   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:31.581394   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:32.080782   38829 type.go:168] "Request Body" body=""
	I1213 18:39:32.080853   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:32.081134   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:32.580755   38829 type.go:168] "Request Body" body=""
	I1213 18:39:32.580829   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:32.581200   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:33.080901   38829 type.go:168] "Request Body" body=""
	I1213 18:39:33.080972   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:33.081318   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:33.580802   38829 type.go:168] "Request Body" body=""
	I1213 18:39:33.580878   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:33.581182   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:34.080872   38829 type.go:168] "Request Body" body=""
	I1213 18:39:34.080943   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:34.081303   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:34.081358   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:34.580730   38829 type.go:168] "Request Body" body=""
	I1213 18:39:34.580804   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:34.581136   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:35.080815   38829 type.go:168] "Request Body" body=""
	I1213 18:39:35.080883   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:35.081173   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:35.580730   38829 type.go:168] "Request Body" body=""
	I1213 18:39:35.580802   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:35.581133   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:36.080735   38829 type.go:168] "Request Body" body=""
	I1213 18:39:36.080809   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:36.081172   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:36.580859   38829 type.go:168] "Request Body" body=""
	I1213 18:39:36.580941   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:36.581223   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:36.581264   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:37.080720   38829 type.go:168] "Request Body" body=""
	I1213 18:39:37.080813   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:37.081267   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:37.580761   38829 type.go:168] "Request Body" body=""
	I1213 18:39:37.580833   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:37.581165   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:38.080809   38829 type.go:168] "Request Body" body=""
	I1213 18:39:38.080881   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:38.081177   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:38.581160   38829 type.go:168] "Request Body" body=""
	I1213 18:39:38.581229   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:38.581546   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:38.581608   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:39.081316   38829 type.go:168] "Request Body" body=""
	I1213 18:39:39.081387   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:39.081699   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:39.581307   38829 type.go:168] "Request Body" body=""
	I1213 18:39:39.581382   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:39.581710   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:40.081503   38829 type.go:168] "Request Body" body=""
	I1213 18:39:40.081578   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:40.081882   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:40.581632   38829 type.go:168] "Request Body" body=""
	I1213 18:39:40.581730   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:40.582090   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:40.582139   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:41.080640   38829 type.go:168] "Request Body" body=""
	I1213 18:39:41.080710   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:41.081046   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:41.580670   38829 type.go:168] "Request Body" body=""
	I1213 18:39:41.580748   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:41.581076   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:42.080797   38829 type.go:168] "Request Body" body=""
	I1213 18:39:42.080878   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:42.081282   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:42.580711   38829 type.go:168] "Request Body" body=""
	I1213 18:39:42.580802   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:42.581132   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:43.080747   38829 type.go:168] "Request Body" body=""
	I1213 18:39:43.080819   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:43.081217   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:43.081283   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:43.580965   38829 type.go:168] "Request Body" body=""
	I1213 18:39:43.581057   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:43.581416   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:44.081437   38829 type.go:168] "Request Body" body=""
	I1213 18:39:44.081507   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:44.081776   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:44.581633   38829 type.go:168] "Request Body" body=""
	I1213 18:39:44.581707   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:44.582020   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:45.080770   38829 type.go:168] "Request Body" body=""
	I1213 18:39:45.080891   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:45.081375   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:45.081434   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:45.581089   38829 type.go:168] "Request Body" body=""
	I1213 18:39:45.581158   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:45.581469   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:46.080755   38829 type.go:168] "Request Body" body=""
	I1213 18:39:46.080828   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:46.081201   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:46.580794   38829 type.go:168] "Request Body" body=""
	I1213 18:39:46.580865   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:46.581173   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:47.080689   38829 type.go:168] "Request Body" body=""
	I1213 18:39:47.080768   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:47.081094   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:47.580669   38829 type.go:168] "Request Body" body=""
	I1213 18:39:47.580763   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:47.581109   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:47.581164   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:48.080848   38829 type.go:168] "Request Body" body=""
	I1213 18:39:48.080924   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:48.081228   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:48.581237   38829 type.go:168] "Request Body" body=""
	I1213 18:39:48.581311   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:48.581637   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:49.081081   38829 type.go:168] "Request Body" body=""
	I1213 18:39:49.081164   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:49.081471   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:49.581258   38829 type.go:168] "Request Body" body=""
	I1213 18:39:49.581336   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:49.581617   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:49.581664   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:50.081346   38829 type.go:168] "Request Body" body=""
	I1213 18:39:50.081416   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:50.081693   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:50.581552   38829 type.go:168] "Request Body" body=""
	I1213 18:39:50.581621   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:50.581942   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:51.080672   38829 type.go:168] "Request Body" body=""
	I1213 18:39:51.080806   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:51.081235   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:51.580885   38829 type.go:168] "Request Body" body=""
	I1213 18:39:51.580958   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:51.581315   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:52.080737   38829 type.go:168] "Request Body" body=""
	I1213 18:39:52.080811   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:52.081193   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:52.081249   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:52.580704   38829 type.go:168] "Request Body" body=""
	I1213 18:39:52.580784   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:52.581172   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:53.080692   38829 type.go:168] "Request Body" body=""
	I1213 18:39:53.080761   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:53.081060   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:53.580744   38829 type.go:168] "Request Body" body=""
	I1213 18:39:53.580823   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:53.581232   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:54.081089   38829 type.go:168] "Request Body" body=""
	I1213 18:39:54.081164   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:54.081658   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:54.081712   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
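The cycle repeating above is a node-readiness poll: roughly every 500 ms (the .080/.580 timestamps) the client issues a GET for the node object "functional-752103", and after each connection-refused failure node_ready.go logs "(will retry)" and tries again until the node's "Ready" condition can be read. The following is only a minimal client-go sketch of that retry pattern, not the minikube implementation; the kubeconfig path and the two-minute timeout are illustrative assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; the test harness wires up its own client.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll every 500ms, as in the log, for up to an assumed two minutes.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 2*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := client.CoreV1().Nodes().Get(ctx, "functional-752103", metav1.GetOptions{})
			if err != nil {
				// "dial tcp ... connection refused" lands here; returning (false, nil)
				// keeps the poll going, which matches the "(will retry)" warnings above.
				fmt.Printf("error getting node (will retry): %v\n", err)
				return false, nil
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		fmt.Printf("node never became Ready: %v\n", err)
	}
}
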
	I1213 18:39:54.581346   38829 type.go:168] "Request Body" body=""
	I1213 18:39:54.581418   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:54.581673   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:55.081499   38829 type.go:168] "Request Body" body=""
	I1213 18:39:55.081596   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:55.081941   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:55.580685   38829 type.go:168] "Request Body" body=""
	I1213 18:39:55.580777   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:55.581180   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:56.080674   38829 type.go:168] "Request Body" body=""
	I1213 18:39:56.080750   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:56.081047   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:56.580707   38829 type.go:168] "Request Body" body=""
	I1213 18:39:56.580778   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:56.581204   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:56.581262   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:57.080917   38829 type.go:168] "Request Body" body=""
	I1213 18:39:57.081002   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:57.081366   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:57.580664   38829 type.go:168] "Request Body" body=""
	I1213 18:39:57.580745   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:57.581033   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:58.081028   38829 type.go:168] "Request Body" body=""
	I1213 18:39:58.081122   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:58.081478   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:58.581557   38829 type.go:168] "Request Body" body=""
	I1213 18:39:58.581639   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:58.582001   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:58.582075   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:59.081358   38829 type.go:168] "Request Body" body=""
	I1213 18:39:59.081453   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:59.081774   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:59.581595   38829 type.go:168] "Request Body" body=""
	I1213 18:39:59.581667   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:59.581967   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:00.080718   38829 type.go:168] "Request Body" body=""
	I1213 18:40:00.080803   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:00.081201   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:00.582760   38829 type.go:168] "Request Body" body=""
	I1213 18:40:00.582857   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:00.583187   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:00.583244   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:01.080684   38829 type.go:168] "Request Body" body=""
	I1213 18:40:01.080755   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:01.081087   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:01.580820   38829 type.go:168] "Request Body" body=""
	I1213 18:40:01.580895   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:01.581240   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:02.080921   38829 type.go:168] "Request Body" body=""
	I1213 18:40:02.080993   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:02.081270   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:02.580731   38829 type.go:168] "Request Body" body=""
	I1213 18:40:02.580806   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:02.581172   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:03.080880   38829 type.go:168] "Request Body" body=""
	I1213 18:40:03.080955   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:03.081306   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:03.081361   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:03.580996   38829 type.go:168] "Request Body" body=""
	I1213 18:40:03.581076   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:03.581335   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:04.080737   38829 type.go:168] "Request Body" body=""
	I1213 18:40:04.080818   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:04.081183   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:04.580737   38829 type.go:168] "Request Body" body=""
	I1213 18:40:04.580808   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:04.581149   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:05.080850   38829 type.go:168] "Request Body" body=""
	I1213 18:40:05.080927   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:05.081263   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:05.580963   38829 type.go:168] "Request Body" body=""
	I1213 18:40:05.581056   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:05.581401   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:05.581460   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:06.081245   38829 type.go:168] "Request Body" body=""
	I1213 18:40:06.081316   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:06.081669   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:06.581426   38829 type.go:168] "Request Body" body=""
	I1213 18:40:06.581509   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:06.581848   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:07.081645   38829 type.go:168] "Request Body" body=""
	I1213 18:40:07.081722   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:07.082062   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:07.580728   38829 type.go:168] "Request Body" body=""
	I1213 18:40:07.580813   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:07.581162   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:08.080728   38829 type.go:168] "Request Body" body=""
	I1213 18:40:08.080798   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:08.081088   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:08.081131   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:08.580917   38829 type.go:168] "Request Body" body=""
	I1213 18:40:08.580997   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:08.581369   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:09.081067   38829 type.go:168] "Request Body" body=""
	I1213 18:40:09.081141   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:09.081470   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:09.581192   38829 type.go:168] "Request Body" body=""
	I1213 18:40:09.581258   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:09.581523   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:10.081376   38829 type.go:168] "Request Body" body=""
	I1213 18:40:10.081454   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:10.081809   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:10.081865   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:10.581615   38829 type.go:168] "Request Body" body=""
	I1213 18:40:10.581696   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:10.582036   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:11.080690   38829 type.go:168] "Request Body" body=""
	I1213 18:40:11.080762   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:11.081125   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:11.580814   38829 type.go:168] "Request Body" body=""
	I1213 18:40:11.580891   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:11.581233   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:12.080745   38829 type.go:168] "Request Body" body=""
	I1213 18:40:12.080820   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:12.081174   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:12.580730   38829 type.go:168] "Request Body" body=""
	I1213 18:40:12.580802   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:12.581118   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:12.581177   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:13.080870   38829 type.go:168] "Request Body" body=""
	I1213 18:40:13.080953   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:13.081298   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:13.580990   38829 type.go:168] "Request Body" body=""
	I1213 18:40:13.581130   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:13.581452   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:14.081563   38829 type.go:168] "Request Body" body=""
	I1213 18:40:14.081631   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:14.081949   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:14.580642   38829 type.go:168] "Request Body" body=""
	I1213 18:40:14.580724   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:14.581092   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:15.080672   38829 type.go:168] "Request Body" body=""
	I1213 18:40:15.080749   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:15.081138   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:15.081197   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:15.580905   38829 type.go:168] "Request Body" body=""
	I1213 18:40:15.580977   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:15.581270   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:16.080728   38829 type.go:168] "Request Body" body=""
	I1213 18:40:16.080801   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:16.081170   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:16.580745   38829 type.go:168] "Request Body" body=""
	I1213 18:40:16.580823   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:16.581182   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:17.080854   38829 type.go:168] "Request Body" body=""
	I1213 18:40:17.080925   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:17.081196   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:17.081236   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:17.580885   38829 type.go:168] "Request Body" body=""
	I1213 18:40:17.580960   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:17.581311   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:18.081048   38829 type.go:168] "Request Body" body=""
	I1213 18:40:18.081128   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:18.081456   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:18.581421   38829 type.go:168] "Request Body" body=""
	I1213 18:40:18.581495   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:18.581752   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:19.081269   38829 type.go:168] "Request Body" body=""
	I1213 18:40:19.081345   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:19.081667   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:19.081723   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:19.581465   38829 type.go:168] "Request Body" body=""
	I1213 18:40:19.581546   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:19.581834   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:20.081620   38829 type.go:168] "Request Body" body=""
	I1213 18:40:20.081707   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:20.082023   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:20.580732   38829 type.go:168] "Request Body" body=""
	I1213 18:40:20.580806   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:20.581185   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:21.080748   38829 type.go:168] "Request Body" body=""
	I1213 18:40:21.080828   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:21.081195   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:21.580880   38829 type.go:168] "Request Body" body=""
	I1213 18:40:21.580954   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:21.581229   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:21.581273   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:22.080726   38829 type.go:168] "Request Body" body=""
	I1213 18:40:22.080802   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:22.081186   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:22.580892   38829 type.go:168] "Request Body" body=""
	I1213 18:40:22.580971   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:22.581314   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:23.080852   38829 type.go:168] "Request Body" body=""
	I1213 18:40:23.080921   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:23.081254   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:23.580738   38829 type.go:168] "Request Body" body=""
	I1213 18:40:23.580816   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:23.581213   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:24.080992   38829 type.go:168] "Request Body" body=""
	I1213 18:40:24.081086   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:24.081439   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:24.081493   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
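Every failure in this stretch is the same raw TCP error, "dial tcp 192.168.49.2:8441: connect: connection refused", meaning nothing is accepting connections on the apiserver port at all; the requests never reach TLS or HTTP. A dependency-free sketch of verifying that from the same host (the address is taken from the log; the two-second timeout is an assumption):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
	if err != nil {
		// While the apiserver is down this prints the same "connection refused" error.
		fmt.Println("apiserver port not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("apiserver port is accepting TCP connections")
}
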
	I1213 18:40:24.581181   38829 type.go:168] "Request Body" body=""
	I1213 18:40:24.581254   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:24.581518   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:25.081519   38829 type.go:168] "Request Body" body=""
	I1213 18:40:25.081638   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:25.082066   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:25.580956   38829 type.go:168] "Request Body" body=""
	I1213 18:40:25.581049   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:25.581403   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:26.081103   38829 type.go:168] "Request Body" body=""
	I1213 18:40:26.081188   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:26.081496   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:26.081544   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:26.581271   38829 type.go:168] "Request Body" body=""
	I1213 18:40:26.581346   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:26.581679   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:27.081463   38829 type.go:168] "Request Body" body=""
	I1213 18:40:27.081544   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:27.081845   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:27.581582   38829 type.go:168] "Request Body" body=""
	I1213 18:40:27.581657   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:27.581970   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:28.080670   38829 type.go:168] "Request Body" body=""
	I1213 18:40:28.080746   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:28.081095   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:28.580759   38829 type.go:168] "Request Body" body=""
	I1213 18:40:28.580833   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:28.581189   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:28.581244   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:29.080966   38829 type.go:168] "Request Body" body=""
	I1213 18:40:29.081057   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:29.081325   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:29.580737   38829 type.go:168] "Request Body" body=""
	I1213 18:40:29.580809   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:29.581235   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:30.080981   38829 type.go:168] "Request Body" body=""
	I1213 18:40:30.081106   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:30.081499   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:30.581288   38829 type.go:168] "Request Body" body=""
	I1213 18:40:30.581365   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:30.581686   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:30.581744   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:31.081563   38829 type.go:168] "Request Body" body=""
	I1213 18:40:31.081643   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:31.081985   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:31.580733   38829 type.go:168] "Request Body" body=""
	I1213 18:40:31.580813   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:31.581128   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:32.080686   38829 type.go:168] "Request Body" body=""
	I1213 18:40:32.080759   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:32.081089   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:32.580719   38829 type.go:168] "Request Body" body=""
	I1213 18:40:32.580795   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:32.581153   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:33.080697   38829 type.go:168] "Request Body" body=""
	I1213 18:40:33.080771   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:33.081078   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:33.081125   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:33.580695   38829 type.go:168] "Request Body" body=""
	I1213 18:40:33.580776   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:33.581082   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:34.080711   38829 type.go:168] "Request Body" body=""
	I1213 18:40:34.080785   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:34.081116   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:34.580731   38829 type.go:168] "Request Body" body=""
	I1213 18:40:34.580810   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:34.581135   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:35.080858   38829 type.go:168] "Request Body" body=""
	I1213 18:40:35.080940   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:35.081258   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:35.081316   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:35.580736   38829 type.go:168] "Request Body" body=""
	I1213 18:40:35.580819   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:35.581180   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:36.080905   38829 type.go:168] "Request Body" body=""
	I1213 18:40:36.080982   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:36.081405   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:36.580715   38829 type.go:168] "Request Body" body=""
	I1213 18:40:36.580780   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:36.581071   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:37.080758   38829 type.go:168] "Request Body" body=""
	I1213 18:40:37.080841   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:37.081177   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:37.580742   38829 type.go:168] "Request Body" body=""
	I1213 18:40:37.580822   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:37.581185   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:37.581240   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:38.080845   38829 type.go:168] "Request Body" body=""
	I1213 18:40:38.080924   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:38.081284   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:38.580992   38829 type.go:168] "Request Body" body=""
	I1213 18:40:38.581079   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:38.581427   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:39.081037   38829 type.go:168] "Request Body" body=""
	I1213 18:40:39.081109   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:39.081425   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:39.580691   38829 type.go:168] "Request Body" body=""
	I1213 18:40:39.580779   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:39.581096   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:40.080864   38829 type.go:168] "Request Body" body=""
	I1213 18:40:40.080952   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:40.081316   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:40.081370   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:40.581072   38829 type.go:168] "Request Body" body=""
	I1213 18:40:40.581147   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:40.581455   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:41.080649   38829 type.go:168] "Request Body" body=""
	I1213 18:40:41.080720   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:41.080968   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:41.580717   38829 type.go:168] "Request Body" body=""
	I1213 18:40:41.580821   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:41.581143   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:42.080793   38829 type.go:168] "Request Body" body=""
	I1213 18:40:42.080889   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:42.081224   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:42.580774   38829 type.go:168] "Request Body" body=""
	I1213 18:40:42.580846   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:42.581129   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:42.581171   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
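Each request above advertises "Accept: application/vnd.kubernetes.protobuf,application/json", i.e. the client negotiates protobuf-encoded responses with a JSON fallback. In client-go this preference lives on the rest.Config; a sketch of setting it explicitly (the accept string is copied from the log, the kubeconfig path is an assumption):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	// Ask for protobuf first, fall back to JSON, and send request bodies as protobuf.
	cfg.AcceptContentTypes = "application/vnd.kubernetes.protobuf,application/json"
	cfg.ContentType = "application/vnd.kubernetes.protobuf"
	fmt.Println("accept:", cfg.AcceptContentTypes, "content-type:", cfg.ContentType)
}
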
	I1213 18:40:43.080817   38829 type.go:168] "Request Body" body=""
	I1213 18:40:43.080889   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:43.081182   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:43.580912   38829 type.go:168] "Request Body" body=""
	I1213 18:40:43.581022   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:43.581350   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:44.081100   38829 type.go:168] "Request Body" body=""
	I1213 18:40:44.081184   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:44.081466   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:44.581295   38829 type.go:168] "Request Body" body=""
	I1213 18:40:44.581368   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:44.581680   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:44.581735   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:45.081574   38829 type.go:168] "Request Body" body=""
	I1213 18:40:45.081671   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:45.082057   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:45.580753   38829 type.go:168] "Request Body" body=""
	I1213 18:40:45.580826   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:45.581123   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the identical GET https://192.168.49.2:8441/api/v1/nodes/functional-752103 poll repeated roughly every 500ms from 18:40:46 through 18:41:47, each attempt logging an empty Response (status="" milliseconds=0); the node_ready.go:55 "connection refused" warning shown below was also logged at 18:40:47, 18:40:49, 18:40:51, 18:40:54, 18:40:56, 18:40:59, 18:41:01, 18:41:04, 18:41:06, 18:41:08, 18:41:11, 18:41:13, 18:41:15, 18:41:17, 18:41:19, 18:41:22, 18:41:24, 18:41:26, 18:41:29, 18:41:31, 18:41:33, 18:41:35, 18:41:37, 18:41:40, 18:41:42 and 18:41:44 ...]
	W1213 18:41:47.080980   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:47.580668   38829 type.go:168] "Request Body" body=""
	I1213 18:41:47.580743   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:47.581077   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:48.080761   38829 type.go:168] "Request Body" body=""
	I1213 18:41:48.080842   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:48.081166   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:48.581104   38829 type.go:168] "Request Body" body=""
	I1213 18:41:48.581172   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:48.581434   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:49.081502   38829 type.go:168] "Request Body" body=""
	I1213 18:41:49.081574   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:49.081903   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:49.081968   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:49.580639   38829 type.go:168] "Request Body" body=""
	I1213 18:41:49.580722   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:49.581089   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:50.080709   38829 type.go:168] "Request Body" body=""
	I1213 18:41:50.080785   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:50.081111   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:50.580720   38829 type.go:168] "Request Body" body=""
	I1213 18:41:50.580802   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:50.581143   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:51.080888   38829 type.go:168] "Request Body" body=""
	I1213 18:41:51.080963   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:51.081279   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:51.580674   38829 type.go:168] "Request Body" body=""
	I1213 18:41:51.580740   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:51.581077   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:51.581128   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:52.080773   38829 type.go:168] "Request Body" body=""
	I1213 18:41:52.080894   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:52.081249   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:52.580793   38829 type.go:168] "Request Body" body=""
	I1213 18:41:52.580867   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:52.581218   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:53.080706   38829 type.go:168] "Request Body" body=""
	I1213 18:41:53.080781   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:53.081080   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:53.580683   38829 type.go:168] "Request Body" body=""
	I1213 18:41:53.580763   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:53.581106   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:53.581159   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:54.080735   38829 type.go:168] "Request Body" body=""
	I1213 18:41:54.080815   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:54.081170   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:54.580662   38829 type.go:168] "Request Body" body=""
	I1213 18:41:54.580733   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:54.581088   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:55.080714   38829 type.go:168] "Request Body" body=""
	I1213 18:41:55.080791   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:55.081154   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:55.580764   38829 type.go:168] "Request Body" body=""
	I1213 18:41:55.580837   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:55.581137   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:55.581182   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:56.080717   38829 type.go:168] "Request Body" body=""
	I1213 18:41:56.080790   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:56.081130   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:56.580729   38829 type.go:168] "Request Body" body=""
	I1213 18:41:56.580826   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:56.581140   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:57.080852   38829 type.go:168] "Request Body" body=""
	I1213 18:41:57.080924   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:57.081256   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:57.580921   38829 type.go:168] "Request Body" body=""
	I1213 18:41:57.581000   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:57.581269   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:57.581307   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:58.080750   38829 type.go:168] "Request Body" body=""
	I1213 18:41:58.080828   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:58.081201   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:58.580714   38829 type.go:168] "Request Body" body=""
	I1213 18:41:58.580799   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:58.581146   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:59.081521   38829 type.go:168] "Request Body" body=""
	I1213 18:41:59.081580   38829 node_ready.go:38] duration metric: took 6m0.001077775s for node "functional-752103" to be "Ready" ...
	I1213 18:41:59.084666   38829 out.go:203] 
	W1213 18:41:59.087601   38829 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1213 18:41:59.087625   38829 out.go:285] * 
	W1213 18:41:59.089766   38829 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 18:41:59.092666   38829 out.go:203] 
	
	
	==> CRI-O <==
	Dec 13 18:42:08 functional-752103 crio[5390]: time="2025-12-13T18:42:08.12832746Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=c755762e-8c02-4272-8897-bf6f4c3f3299 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:42:09 functional-752103 crio[5390]: time="2025-12-13T18:42:09.180193084Z" level=info msg="Checking image status: minikube-local-cache-test:functional-752103" id=d8c01cf5-8f87-4579-830a-467c9aa59a43 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:42:09 functional-752103 crio[5390]: time="2025-12-13T18:42:09.180374213Z" level=info msg="Resolving \"minikube-local-cache-test\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 13 18:42:09 functional-752103 crio[5390]: time="2025-12-13T18:42:09.180418472Z" level=info msg="Image minikube-local-cache-test:functional-752103 not found" id=d8c01cf5-8f87-4579-830a-467c9aa59a43 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:42:09 functional-752103 crio[5390]: time="2025-12-13T18:42:09.180487863Z" level=info msg="Neither image nor artfiact minikube-local-cache-test:functional-752103 found" id=d8c01cf5-8f87-4579-830a-467c9aa59a43 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:42:09 functional-752103 crio[5390]: time="2025-12-13T18:42:09.204345068Z" level=info msg="Checking image status: docker.io/library/minikube-local-cache-test:functional-752103" id=39116f54-1c25-4019-8682-c21aa17467f4 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:42:09 functional-752103 crio[5390]: time="2025-12-13T18:42:09.204482373Z" level=info msg="Image docker.io/library/minikube-local-cache-test:functional-752103 not found" id=39116f54-1c25-4019-8682-c21aa17467f4 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:42:09 functional-752103 crio[5390]: time="2025-12-13T18:42:09.204523202Z" level=info msg="Neither image nor artfiact docker.io/library/minikube-local-cache-test:functional-752103 found" id=39116f54-1c25-4019-8682-c21aa17467f4 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:42:09 functional-752103 crio[5390]: time="2025-12-13T18:42:09.231258723Z" level=info msg="Checking image status: localhost/library/minikube-local-cache-test:functional-752103" id=9b2633cd-0a2d-4c6e-bd18-f94a6181518d name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:42:09 functional-752103 crio[5390]: time="2025-12-13T18:42:09.23139296Z" level=info msg="Image localhost/library/minikube-local-cache-test:functional-752103 not found" id=9b2633cd-0a2d-4c6e-bd18-f94a6181518d name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:42:09 functional-752103 crio[5390]: time="2025-12-13T18:42:09.231456238Z" level=info msg="Neither image nor artfiact localhost/library/minikube-local-cache-test:functional-752103 found" id=9b2633cd-0a2d-4c6e-bd18-f94a6181518d name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:42:10 functional-752103 crio[5390]: time="2025-12-13T18:42:10.200775437Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=52ab8407-ea44-4259-b544-9f05df9b2f6e name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:42:10 functional-752103 crio[5390]: time="2025-12-13T18:42:10.539004038Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=5fab9b18-e673-445d-a76a-ddec399764c0 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:42:10 functional-752103 crio[5390]: time="2025-12-13T18:42:10.539143567Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=5fab9b18-e673-445d-a76a-ddec399764c0 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:42:10 functional-752103 crio[5390]: time="2025-12-13T18:42:10.539178865Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=5fab9b18-e673-445d-a76a-ddec399764c0 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:42:11 functional-752103 crio[5390]: time="2025-12-13T18:42:11.116424968Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=eb444054-ebd3-4c5e-b1a8-680cdcf483d2 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:42:11 functional-752103 crio[5390]: time="2025-12-13T18:42:11.116556234Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=eb444054-ebd3-4c5e-b1a8-680cdcf483d2 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:42:11 functional-752103 crio[5390]: time="2025-12-13T18:42:11.116596883Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=eb444054-ebd3-4c5e-b1a8-680cdcf483d2 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:42:11 functional-752103 crio[5390]: time="2025-12-13T18:42:11.160984811Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=1f2eaebb-7394-4c89-9ded-c81c523ae3bc name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:42:11 functional-752103 crio[5390]: time="2025-12-13T18:42:11.161193846Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=1f2eaebb-7394-4c89-9ded-c81c523ae3bc name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:42:11 functional-752103 crio[5390]: time="2025-12-13T18:42:11.161231433Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=1f2eaebb-7394-4c89-9ded-c81c523ae3bc name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:42:11 functional-752103 crio[5390]: time="2025-12-13T18:42:11.21238756Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=95d12dfb-44bb-43ef-9e17-70d511fc828f name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:42:11 functional-752103 crio[5390]: time="2025-12-13T18:42:11.212543114Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=95d12dfb-44bb-43ef-9e17-70d511fc828f name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:42:11 functional-752103 crio[5390]: time="2025-12-13T18:42:11.212590704Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=95d12dfb-44bb-43ef-9e17-70d511fc828f name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:42:11 functional-752103 crio[5390]: time="2025-12-13T18:42:11.759905226Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=63662edb-6e0f-4d27-af3a-ceaeebbb2a50 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:42:13.332821    9412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:42:13.337653    9412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:42:13.338354    9412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:42:13.340053    9412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:42:13.341211    9412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 17:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014739] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.517365] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033368] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.774100] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.795951] kauditd_printk_skb: 39 callbacks suppressed
	[Dec13 18:17] overlayfs: idmapped layers are currently not supported
	[  +0.067652] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec13 18:23] overlayfs: idmapped layers are currently not supported
	[Dec13 18:24] overlayfs: idmapped layers are currently not supported
	[Dec13 18:42] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 18:42:13 up  1:24,  0 user,  load average: 0.54, 0.35, 0.44
	Linux functional-752103 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 18:42:11 functional-752103 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 18:42:11 functional-752103 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1145.
	Dec 13 18:42:11 functional-752103 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:42:11 functional-752103 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:42:11 functional-752103 kubelet[9309]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:42:11 functional-752103 kubelet[9309]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:42:11 functional-752103 kubelet[9309]: E1213 18:42:11.821290    9309 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 18:42:11 functional-752103 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 18:42:11 functional-752103 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 18:42:12 functional-752103 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1146.
	Dec 13 18:42:12 functional-752103 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:42:12 functional-752103 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:42:12 functional-752103 kubelet[9329]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:42:12 functional-752103 kubelet[9329]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:42:12 functional-752103 kubelet[9329]: E1213 18:42:12.661279    9329 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 18:42:12 functional-752103 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 18:42:12 functional-752103 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 18:42:13 functional-752103 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1147.
	Dec 13 18:42:13 functional-752103 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:42:13 functional-752103 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:42:13 functional-752103 kubelet[9417]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:42:13 functional-752103 kubelet[9417]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:42:13 functional-752103 kubelet[9417]: E1213 18:42:13.385386    9417 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 18:42:13 functional-752103 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 18:42:13 functional-752103 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
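The kubelet restart loop in the log dump above has one explicit root cause: the v1.35.0-beta.0 kubelet refuses to validate its configuration on a cgroup v1 host, so the apiserver on 192.168.49.2:8441 never comes up and the node Ready wait times out after 6m0s. A minimal check of which cgroup version the kic node is actually running, assuming the functional-752103 container from this report is still up ("cgroup2fs" indicates cgroup v2, "tmpfs" indicates the legacy v1 hierarchy the kubelet rejects):

	# check the filesystem type backing /sys/fs/cgroup inside the node container
	docker exec functional-752103 stat -fc %T /sys/fs/cgroup/
	# the same check through minikube's ssh wrapper used elsewhere in this report
	out/minikube-linux-arm64 -p functional-752103 ssh -- stat -fc %T /sys/fs/cgroup/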
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-752103 -n functional-752103
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-752103 -n functional-752103: exit status 2 (338.714061ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-752103" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (2.47s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (2.4s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-752103 get pods
functional_test.go:756: (dbg) Non-zero exit: out/kubectl --context functional-752103 get pods: exit status 1 (104.772978ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:759: failed to run kubectl directly. args "out/kubectl --context functional-752103 get pods": exit status 1
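The kubectl failure just above is a plain TCP refusal on the apiserver endpoint recorded in the functional-752103 kubeconfig (192.168.49.2:8441), matching the dial errors in the minikube logs. A quick way to reproduce the same probe without kubectl, sketched under the assumption that the endpoint is unchanged:

	# with no apiserver listening, this fails the same way kubectl does
	curl -sk --max-time 5 https://192.168.49.2:8441/healthz || echo "apiserver on 192.168.49.2:8441 not reachable"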
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-752103
helpers_test.go:244: (dbg) docker inspect functional-752103:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b",
	        "Created": "2025-12-13T18:27:36.869398923Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 33347,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T18:27:36.933863328Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b/hostname",
	        "HostsPath": "/var/lib/docker/containers/d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b/hosts",
	        "LogPath": "/var/lib/docker/containers/d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b/d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b-json.log",
	        "Name": "/functional-752103",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-752103:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-752103",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b",
	                "LowerDir": "/var/lib/docker/overlay2/4f08fa508e4fa526830ae73227295c5d2e0629c99efbda84852b3df25bd9e170-init/diff:/var/lib/docker/overlay2/4cda671c3c20fb572bbb254b6cb2d66de67b46788c2aa883ec19024f1ff16f23/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4f08fa508e4fa526830ae73227295c5d2e0629c99efbda84852b3df25bd9e170/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4f08fa508e4fa526830ae73227295c5d2e0629c99efbda84852b3df25bd9e170/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4f08fa508e4fa526830ae73227295c5d2e0629c99efbda84852b3df25bd9e170/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-752103",
	                "Source": "/var/lib/docker/volumes/functional-752103/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-752103",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-752103",
	                "name.minikube.sigs.k8s.io": "functional-752103",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "625ea12887c8956887678f2408d6edd5b98f62bce458a6906f4f662a3001a53b",
	            "SandboxKey": "/var/run/docker/netns/625ea12887c8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-752103": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7e:2c:83:4a:30:9a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "84df48e9f7dac8c6a1b67709e5eea216d99d3f16eb50b96c7f0e4a82b3193d56",
	                    "EndpointID": "e69b1f9610d40396647a2d78f0170c31b9cd8e641fc8465e742649cccee8e591",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-752103",
	                        "d72b547cdcc2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
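The inspect output above shows the apiserver container port 8441/tcp published only on 127.0.0.1 with an ephemeral host port (32786 at the time of capture). The live mapping can be read back straight from Docker when checking how that port is exposed on the host; a short sketch using the container name from this report:

	# host-side binding for the kic container's apiserver port
	docker port functional-752103 8441/tcp
	# the same value via a Go template over the inspect data
	docker inspect -f '{{ (index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort }}' functional-752103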
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-752103 -n functional-752103
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-752103 -n functional-752103: exit status 2 (312.660585ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-752103 logs -n 25: (1.064600783s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-350101 image ls --format yaml --alsologtostderr                                                                                        │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ image   │ functional-350101 image ls --format short --alsologtostderr                                                                                       │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ ssh     │ functional-350101 ssh pgrep buildkitd                                                                                                             │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │                     │
	│ image   │ functional-350101 image ls --format json --alsologtostderr                                                                                        │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ image   │ functional-350101 image ls --format table --alsologtostderr                                                                                       │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ image   │ functional-350101 image build -t localhost/my-image:functional-350101 testdata/build --alsologtostderr                                            │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ image   │ functional-350101 image ls                                                                                                                        │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ delete  │ -p functional-350101                                                                                                                              │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ start   │ -p functional-752103 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │                     │
	│ start   │ -p functional-752103 --alsologtostderr -v=8                                                                                                       │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:35 UTC │                     │
	│ cache   │ functional-752103 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │ 13 Dec 25 18:42 UTC │
	│ cache   │ functional-752103 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │ 13 Dec 25 18:42 UTC │
	│ cache   │ functional-752103 cache add registry.k8s.io/pause:latest                                                                                          │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │ 13 Dec 25 18:42 UTC │
	│ cache   │ functional-752103 cache add minikube-local-cache-test:functional-752103                                                                           │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │ 13 Dec 25 18:42 UTC │
	│ cache   │ functional-752103 cache delete minikube-local-cache-test:functional-752103                                                                        │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │ 13 Dec 25 18:42 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │ 13 Dec 25 18:42 UTC │
	│ cache   │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │ 13 Dec 25 18:42 UTC │
	│ ssh     │ functional-752103 ssh sudo crictl images                                                                                                          │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │ 13 Dec 25 18:42 UTC │
	│ ssh     │ functional-752103 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │ 13 Dec 25 18:42 UTC │
	│ ssh     │ functional-752103 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │                     │
	│ cache   │ functional-752103 cache reload                                                                                                                    │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │ 13 Dec 25 18:42 UTC │
	│ ssh     │ functional-752103 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │ 13 Dec 25 18:42 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │ 13 Dec 25 18:42 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │ 13 Dec 25 18:42 UTC │
	│ kubectl │ functional-752103 kubectl -- --context functional-752103 get pods                                                                                 │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 18:35:53
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 18:35:53.999245   38829 out.go:360] Setting OutFile to fd 1 ...
	I1213 18:35:53.999434   38829 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:35:53.999464   38829 out.go:374] Setting ErrFile to fd 2...
	I1213 18:35:53.999486   38829 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:35:53.999778   38829 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-2686/.minikube/bin
	I1213 18:35:54.000250   38829 out.go:368] Setting JSON to false
	I1213 18:35:54.001308   38829 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4706,"bootTime":1765646248,"procs":163,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 18:35:54.001457   38829 start.go:143] virtualization:  
	I1213 18:35:54.010388   38829 out.go:179] * [functional-752103] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 18:35:54.014157   38829 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 18:35:54.014353   38829 notify.go:221] Checking for updates...
	I1213 18:35:54.020075   38829 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 18:35:54.023186   38829 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-2686/kubeconfig
	I1213 18:35:54.026171   38829 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-2686/.minikube
	I1213 18:35:54.029213   38829 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 18:35:54.032235   38829 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 18:35:54.035744   38829 config.go:182] Loaded profile config "functional-752103": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 18:35:54.035909   38829 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 18:35:54.059624   38829 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 18:35:54.059744   38829 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 18:35:54.127464   38829 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 18:35:54.118134446 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 18:35:54.127571   38829 docker.go:319] overlay module found
	I1213 18:35:54.130605   38829 out.go:179] * Using the docker driver based on existing profile
	I1213 18:35:54.133521   38829 start.go:309] selected driver: docker
	I1213 18:35:54.133548   38829 start.go:927] validating driver "docker" against &{Name:functional-752103 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-752103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 18:35:54.133668   38829 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 18:35:54.133779   38829 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 18:35:54.194306   38829 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 18:35:54.184244205 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 18:35:54.194716   38829 cni.go:84] Creating CNI manager for ""
	I1213 18:35:54.194772   38829 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 18:35:54.194827   38829 start.go:353] cluster config:
	{Name:functional-752103 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-752103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 18:35:54.197953   38829 out.go:179] * Starting "functional-752103" primary control-plane node in "functional-752103" cluster
	I1213 18:35:54.200965   38829 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 18:35:54.203964   38829 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 18:35:54.207111   38829 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 18:35:54.207169   38829 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-2686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1213 18:35:54.207189   38829 cache.go:65] Caching tarball of preloaded images
	I1213 18:35:54.207200   38829 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 18:35:54.207268   38829 preload.go:238] Found /home/jenkins/minikube-integration/22122-2686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 18:35:54.207278   38829 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1213 18:35:54.207380   38829 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/config.json ...
	I1213 18:35:54.226684   38829 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 18:35:54.226707   38829 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 18:35:54.226736   38829 cache.go:243] Successfully downloaded all kic artifacts
	I1213 18:35:54.226765   38829 start.go:360] acquireMachinesLock for functional-752103: {Name:mkf4ec1d9e1836ef54983db4562aedfd1a9c51c3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 18:35:54.226834   38829 start.go:364] duration metric: took 45.136µs to acquireMachinesLock for "functional-752103"
	I1213 18:35:54.226856   38829 start.go:96] Skipping create...Using existing machine configuration
	I1213 18:35:54.226865   38829 fix.go:54] fixHost starting: 
	I1213 18:35:54.227126   38829 cli_runner.go:164] Run: docker container inspect functional-752103 --format={{.State.Status}}
	I1213 18:35:54.245088   38829 fix.go:112] recreateIfNeeded on functional-752103: state=Running err=<nil>
	W1213 18:35:54.245125   38829 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 18:35:54.248193   38829 out.go:252] * Updating the running docker "functional-752103" container ...
	I1213 18:35:54.248225   38829 machine.go:94] provisionDockerMachine start ...
	I1213 18:35:54.248302   38829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:35:54.265418   38829 main.go:143] libmachine: Using SSH client type: native
	I1213 18:35:54.265750   38829 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1213 18:35:54.265765   38829 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 18:35:54.412628   38829 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-752103
	
	I1213 18:35:54.412654   38829 ubuntu.go:182] provisioning hostname "functional-752103"
	I1213 18:35:54.412716   38829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:35:54.431532   38829 main.go:143] libmachine: Using SSH client type: native
	I1213 18:35:54.431834   38829 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1213 18:35:54.431851   38829 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-752103 && echo "functional-752103" | sudo tee /etc/hostname
	I1213 18:35:54.592050   38829 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-752103
	
	I1213 18:35:54.592214   38829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:35:54.614592   38829 main.go:143] libmachine: Using SSH client type: native
	I1213 18:35:54.614908   38829 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1213 18:35:54.614930   38829 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-752103' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-752103/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-752103' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 18:35:54.769516   38829 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 18:35:54.769546   38829 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-2686/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-2686/.minikube}
	I1213 18:35:54.769572   38829 ubuntu.go:190] setting up certificates
	I1213 18:35:54.769581   38829 provision.go:84] configureAuth start
	I1213 18:35:54.769640   38829 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-752103
	I1213 18:35:54.787462   38829 provision.go:143] copyHostCerts
	I1213 18:35:54.787509   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem
	I1213 18:35:54.787551   38829 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem, removing ...
	I1213 18:35:54.787563   38829 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem
	I1213 18:35:54.787650   38829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem (1675 bytes)
	I1213 18:35:54.787740   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem
	I1213 18:35:54.787760   38829 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem, removing ...
	I1213 18:35:54.787765   38829 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem
	I1213 18:35:54.787800   38829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem (1082 bytes)
	I1213 18:35:54.787845   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem
	I1213 18:35:54.787868   38829 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem, removing ...
	I1213 18:35:54.787877   38829 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem
	I1213 18:35:54.787902   38829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem (1123 bytes)
	I1213 18:35:54.787955   38829 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem org=jenkins.functional-752103 san=[127.0.0.1 192.168.49.2 functional-752103 localhost minikube]
	I1213 18:35:54.878725   38829 provision.go:177] copyRemoteCerts
	I1213 18:35:54.878794   38829 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 18:35:54.878839   38829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:35:54.895961   38829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa Username:docker}
	I1213 18:35:55.009601   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1213 18:35:55.009696   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 18:35:55.033852   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1213 18:35:55.033923   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 18:35:55.052749   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1213 18:35:55.052813   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 18:35:55.072069   38829 provision.go:87] duration metric: took 302.464055ms to configureAuth
	I1213 18:35:55.072107   38829 ubuntu.go:206] setting minikube options for container-runtime
	I1213 18:35:55.072313   38829 config.go:182] Loaded profile config "functional-752103": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 18:35:55.072426   38829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:35:55.092406   38829 main.go:143] libmachine: Using SSH client type: native
	I1213 18:35:55.092745   38829 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1213 18:35:55.092771   38829 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 18:35:55.413226   38829 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 18:35:55.413251   38829 machine.go:97] duration metric: took 1.16501875s to provisionDockerMachine
	I1213 18:35:55.413264   38829 start.go:293] postStartSetup for "functional-752103" (driver="docker")
	I1213 18:35:55.413300   38829 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 18:35:55.413403   38829 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 18:35:55.413470   38829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:35:55.430709   38829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa Username:docker}
	I1213 18:35:55.537093   38829 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 18:35:55.540324   38829 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1213 18:35:55.540345   38829 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1213 18:35:55.540349   38829 command_runner.go:130] > VERSION_ID="12"
	I1213 18:35:55.540354   38829 command_runner.go:130] > VERSION="12 (bookworm)"
	I1213 18:35:55.540359   38829 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1213 18:35:55.540363   38829 command_runner.go:130] > ID=debian
	I1213 18:35:55.540368   38829 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1213 18:35:55.540373   38829 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1213 18:35:55.540379   38829 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1213 18:35:55.540743   38829 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 18:35:55.540767   38829 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 18:35:55.540779   38829 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-2686/.minikube/addons for local assets ...
	I1213 18:35:55.540839   38829 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-2686/.minikube/files for local assets ...
	I1213 18:35:55.540926   38829 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem -> 46372.pem in /etc/ssl/certs
	I1213 18:35:55.540938   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem -> /etc/ssl/certs/46372.pem
	I1213 18:35:55.541035   38829 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/test/nested/copy/4637/hosts -> hosts in /etc/test/nested/copy/4637
	I1213 18:35:55.541044   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/test/nested/copy/4637/hosts -> /etc/test/nested/copy/4637/hosts
	I1213 18:35:55.541087   38829 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4637
	I1213 18:35:55.548955   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem --> /etc/ssl/certs/46372.pem (1708 bytes)
	I1213 18:35:55.566460   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/test/nested/copy/4637/hosts --> /etc/test/nested/copy/4637/hosts (40 bytes)
	I1213 18:35:55.584163   38829 start.go:296] duration metric: took 170.869499ms for postStartSetup
	I1213 18:35:55.584240   38829 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 18:35:55.584294   38829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:35:55.601966   38829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa Username:docker}
	I1213 18:35:55.706486   38829 command_runner.go:130] > 11%
	I1213 18:35:55.706569   38829 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 18:35:55.711597   38829 command_runner.go:130] > 174G
	I1213 18:35:55.711643   38829 fix.go:56] duration metric: took 1.484775946s for fixHost
	I1213 18:35:55.711654   38829 start.go:83] releasing machines lock for "functional-752103", held for 1.484809349s
	I1213 18:35:55.711733   38829 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-752103
	I1213 18:35:55.731505   38829 ssh_runner.go:195] Run: cat /version.json
	I1213 18:35:55.731524   38829 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 18:35:55.731557   38829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:35:55.731578   38829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:35:55.752781   38829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa Username:docker}
	I1213 18:35:55.757282   38829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa Username:docker}
	I1213 18:35:55.945606   38829 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1213 18:35:55.945674   38829 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "9f3959633d311997d75aab86f8ff840f224c6486"}
	I1213 18:35:55.945816   38829 ssh_runner.go:195] Run: systemctl --version
	I1213 18:35:55.951961   38829 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1213 18:35:55.951999   38829 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1213 18:35:55.952322   38829 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 18:35:55.992229   38829 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1213 18:35:56.001527   38829 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1213 18:35:56.001762   38829 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 18:35:56.001849   38829 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 18:35:56.014010   38829 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 18:35:56.014037   38829 start.go:496] detecting cgroup driver to use...
	I1213 18:35:56.014094   38829 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 18:35:56.014182   38829 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 18:35:56.030879   38829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 18:35:56.046797   38829 docker.go:218] disabling cri-docker service (if available) ...
	I1213 18:35:56.046882   38829 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 18:35:56.067384   38829 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 18:35:56.080815   38829 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 18:35:56.192099   38829 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 18:35:56.317541   38829 docker.go:234] disabling docker service ...
	I1213 18:35:56.317693   38829 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 18:35:56.332696   38829 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 18:35:56.345912   38829 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 18:35:56.463560   38829 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 18:35:56.579100   38829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 18:35:56.592582   38829 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 18:35:56.605285   38829 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1213 18:35:56.606432   38829 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 18:35:56.606495   38829 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:35:56.615251   38829 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 18:35:56.615329   38829 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:35:56.624699   38829 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:35:56.633587   38829 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:35:56.642744   38829 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 18:35:56.651128   38829 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:35:56.660108   38829 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:35:56.669661   38829 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:35:56.678839   38829 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 18:35:56.685773   38829 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1213 18:35:56.686744   38829 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 18:35:56.694432   38829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 18:35:56.830483   38829 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 18:35:57.005048   38829 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 18:35:57.005450   38829 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 18:35:57.010285   38829 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1213 18:35:57.010309   38829 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1213 18:35:57.010316   38829 command_runner.go:130] > Device: 0,72	Inode: 1640        Links: 1
	I1213 18:35:57.010333   38829 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 18:35:57.010338   38829 command_runner.go:130] > Access: 2025-12-13 18:35:56.944672058 +0000
	I1213 18:35:57.010348   38829 command_runner.go:130] > Modify: 2025-12-13 18:35:56.944672058 +0000
	I1213 18:35:57.010355   38829 command_runner.go:130] > Change: 2025-12-13 18:35:56.944672058 +0000
	I1213 18:35:57.010364   38829 command_runner.go:130] >  Birth: -
	I1213 18:35:57.010406   38829 start.go:564] Will wait 60s for crictl version
	I1213 18:35:57.010459   38829 ssh_runner.go:195] Run: which crictl
	I1213 18:35:57.014231   38829 command_runner.go:130] > /usr/local/bin/crictl
	I1213 18:35:57.014339   38829 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 18:35:57.039763   38829 command_runner.go:130] > Version:  0.1.0
	I1213 18:35:57.039785   38829 command_runner.go:130] > RuntimeName:  cri-o
	I1213 18:35:57.039789   38829 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1213 18:35:57.039795   38829 command_runner.go:130] > RuntimeApiVersion:  v1
	I1213 18:35:57.039807   38829 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 18:35:57.039886   38829 ssh_runner.go:195] Run: crio --version
	I1213 18:35:57.067200   38829 command_runner.go:130] > crio version 1.34.3
	I1213 18:35:57.067289   38829 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1213 18:35:57.067311   38829 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1213 18:35:57.067352   38829 command_runner.go:130] >    GitTreeState:   dirty
	I1213 18:35:57.067376   38829 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1213 18:35:57.067397   38829 command_runner.go:130] >    GoVersion:      go1.24.6
	I1213 18:35:57.067430   38829 command_runner.go:130] >    Compiler:       gc
	I1213 18:35:57.067455   38829 command_runner.go:130] >    Platform:       linux/arm64
	I1213 18:35:57.067476   38829 command_runner.go:130] >    Linkmode:       static
	I1213 18:35:57.067513   38829 command_runner.go:130] >    BuildTags:
	I1213 18:35:57.067537   38829 command_runner.go:130] >      static
	I1213 18:35:57.067557   38829 command_runner.go:130] >      netgo
	I1213 18:35:57.067592   38829 command_runner.go:130] >      osusergo
	I1213 18:35:57.067614   38829 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1213 18:35:57.067632   38829 command_runner.go:130] >      seccomp
	I1213 18:35:57.067651   38829 command_runner.go:130] >      apparmor
	I1213 18:35:57.067685   38829 command_runner.go:130] >      selinux
	I1213 18:35:57.067706   38829 command_runner.go:130] >    LDFlags:          unknown
	I1213 18:35:57.067726   38829 command_runner.go:130] >    SeccompEnabled:   true
	I1213 18:35:57.067760   38829 command_runner.go:130] >    AppArmorEnabled:  false
	I1213 18:35:57.069374   38829 ssh_runner.go:195] Run: crio --version
	I1213 18:35:57.097856   38829 command_runner.go:130] > crio version 1.34.3
	I1213 18:35:57.097937   38829 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1213 18:35:57.097971   38829 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1213 18:35:57.098005   38829 command_runner.go:130] >    GitTreeState:   dirty
	I1213 18:35:57.098025   38829 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1213 18:35:57.098058   38829 command_runner.go:130] >    GoVersion:      go1.24.6
	I1213 18:35:57.098082   38829 command_runner.go:130] >    Compiler:       gc
	I1213 18:35:57.098103   38829 command_runner.go:130] >    Platform:       linux/arm64
	I1213 18:35:57.098156   38829 command_runner.go:130] >    Linkmode:       static
	I1213 18:35:57.098180   38829 command_runner.go:130] >    BuildTags:
	I1213 18:35:57.098200   38829 command_runner.go:130] >      static
	I1213 18:35:57.098234   38829 command_runner.go:130] >      netgo
	I1213 18:35:57.098253   38829 command_runner.go:130] >      osusergo
	I1213 18:35:57.098277   38829 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1213 18:35:57.098306   38829 command_runner.go:130] >      seccomp
	I1213 18:35:57.098328   38829 command_runner.go:130] >      apparmor
	I1213 18:35:57.098348   38829 command_runner.go:130] >      selinux
	I1213 18:35:57.098384   38829 command_runner.go:130] >    LDFlags:          unknown
	I1213 18:35:57.098407   38829 command_runner.go:130] >    SeccompEnabled:   true
	I1213 18:35:57.098425   38829 command_runner.go:130] >    AppArmorEnabled:  false
	I1213 18:35:57.103998   38829 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1213 18:35:57.106795   38829 cli_runner.go:164] Run: docker network inspect functional-752103 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 18:35:57.122531   38829 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 18:35:57.126557   38829 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1213 18:35:57.126659   38829 kubeadm.go:884] updating cluster {Name:functional-752103 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-752103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 18:35:57.126789   38829 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 18:35:57.126855   38829 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 18:35:57.159258   38829 command_runner.go:130] > {
	I1213 18:35:57.159281   38829 command_runner.go:130] >   "images":  [
	I1213 18:35:57.159286   38829 command_runner.go:130] >     {
	I1213 18:35:57.159295   38829 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1213 18:35:57.159299   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.159305   38829 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1213 18:35:57.159309   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159312   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.159321   38829 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1213 18:35:57.159333   38829 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1213 18:35:57.159349   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159354   38829 command_runner.go:130] >       "size":  "111333938",
	I1213 18:35:57.159358   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.159370   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.159373   38829 command_runner.go:130] >     },
	I1213 18:35:57.159376   38829 command_runner.go:130] >     {
	I1213 18:35:57.159382   38829 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1213 18:35:57.159389   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.159394   38829 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1213 18:35:57.159398   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159402   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.159410   38829 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1213 18:35:57.159421   38829 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1213 18:35:57.159425   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159429   38829 command_runner.go:130] >       "size":  "29037500",
	I1213 18:35:57.159435   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.159443   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.159450   38829 command_runner.go:130] >     },
	I1213 18:35:57.159453   38829 command_runner.go:130] >     {
	I1213 18:35:57.159459   38829 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1213 18:35:57.159466   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.159471   38829 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1213 18:35:57.159474   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159481   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.159489   38829 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1213 18:35:57.159500   38829 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1213 18:35:57.159504   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159508   38829 command_runner.go:130] >       "size":  "74491780",
	I1213 18:35:57.159514   38829 command_runner.go:130] >       "username":  "nonroot",
	I1213 18:35:57.159519   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.159526   38829 command_runner.go:130] >     },
	I1213 18:35:57.159529   38829 command_runner.go:130] >     {
	I1213 18:35:57.159536   38829 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1213 18:35:57.159548   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.159554   38829 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1213 18:35:57.159560   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159564   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.159572   38829 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1213 18:35:57.159582   38829 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1213 18:35:57.159586   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159596   38829 command_runner.go:130] >       "size":  "60857170",
	I1213 18:35:57.159600   38829 command_runner.go:130] >       "uid":  {
	I1213 18:35:57.159604   38829 command_runner.go:130] >         "value":  "0"
	I1213 18:35:57.159607   38829 command_runner.go:130] >       },
	I1213 18:35:57.159618   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.159626   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.159629   38829 command_runner.go:130] >     },
	I1213 18:35:57.159633   38829 command_runner.go:130] >     {
	I1213 18:35:57.159646   38829 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1213 18:35:57.159650   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.159655   38829 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1213 18:35:57.159661   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159665   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.159673   38829 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1213 18:35:57.159684   38829 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1213 18:35:57.159687   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159691   38829 command_runner.go:130] >       "size":  "84949999",
	I1213 18:35:57.159697   38829 command_runner.go:130] >       "uid":  {
	I1213 18:35:57.159701   38829 command_runner.go:130] >         "value":  "0"
	I1213 18:35:57.159706   38829 command_runner.go:130] >       },
	I1213 18:35:57.159710   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.159720   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.159723   38829 command_runner.go:130] >     },
	I1213 18:35:57.159726   38829 command_runner.go:130] >     {
	I1213 18:35:57.159733   38829 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1213 18:35:57.159740   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.159750   38829 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1213 18:35:57.159756   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159762   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.159771   38829 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1213 18:35:57.159782   38829 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1213 18:35:57.159786   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159790   38829 command_runner.go:130] >       "size":  "72170325",
	I1213 18:35:57.159794   38829 command_runner.go:130] >       "uid":  {
	I1213 18:35:57.159800   38829 command_runner.go:130] >         "value":  "0"
	I1213 18:35:57.159804   38829 command_runner.go:130] >       },
	I1213 18:35:57.159810   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.159814   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.159820   38829 command_runner.go:130] >     },
	I1213 18:35:57.159823   38829 command_runner.go:130] >     {
	I1213 18:35:57.159829   38829 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1213 18:35:57.159836   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.159841   38829 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1213 18:35:57.159847   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159851   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.159859   38829 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1213 18:35:57.159870   38829 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1213 18:35:57.159874   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159878   38829 command_runner.go:130] >       "size":  "74106775",
	I1213 18:35:57.159882   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.159888   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.159892   38829 command_runner.go:130] >     },
	I1213 18:35:57.159897   38829 command_runner.go:130] >     {
	I1213 18:35:57.159904   38829 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1213 18:35:57.159910   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.159916   38829 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1213 18:35:57.159926   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159934   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.159942   38829 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1213 18:35:57.159966   38829 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1213 18:35:57.159973   38829 command_runner.go:130] >       ],
	I1213 18:35:57.159977   38829 command_runner.go:130] >       "size":  "49822549",
	I1213 18:35:57.159981   38829 command_runner.go:130] >       "uid":  {
	I1213 18:35:57.159985   38829 command_runner.go:130] >         "value":  "0"
	I1213 18:35:57.159991   38829 command_runner.go:130] >       },
	I1213 18:35:57.159995   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.160003   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.160008   38829 command_runner.go:130] >     },
	I1213 18:35:57.160011   38829 command_runner.go:130] >     {
	I1213 18:35:57.160017   38829 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1213 18:35:57.160025   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.160030   38829 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1213 18:35:57.160033   38829 command_runner.go:130] >       ],
	I1213 18:35:57.160040   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.160048   38829 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1213 18:35:57.160059   38829 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1213 18:35:57.160063   38829 command_runner.go:130] >       ],
	I1213 18:35:57.160067   38829 command_runner.go:130] >       "size":  "519884",
	I1213 18:35:57.160070   38829 command_runner.go:130] >       "uid":  {
	I1213 18:35:57.160077   38829 command_runner.go:130] >         "value":  "65535"
	I1213 18:35:57.160080   38829 command_runner.go:130] >       },
	I1213 18:35:57.160084   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.160093   38829 command_runner.go:130] >       "pinned":  true
	I1213 18:35:57.160096   38829 command_runner.go:130] >     }
	I1213 18:35:57.160101   38829 command_runner.go:130] >   ]
	I1213 18:35:57.160112   38829 command_runner.go:130] > }
	I1213 18:35:57.162388   38829 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 18:35:57.162414   38829 crio.go:433] Images already preloaded, skipping extraction
	I1213 18:35:57.162470   38829 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 18:35:57.186777   38829 command_runner.go:130] > {
	I1213 18:35:57.186796   38829 command_runner.go:130] >   "images":  [
	I1213 18:35:57.186801   38829 command_runner.go:130] >     {
	I1213 18:35:57.186817   38829 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1213 18:35:57.186822   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.186828   38829 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1213 18:35:57.186832   38829 command_runner.go:130] >       ],
	I1213 18:35:57.186836   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.186846   38829 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1213 18:35:57.186854   38829 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1213 18:35:57.186857   38829 command_runner.go:130] >       ],
	I1213 18:35:57.186861   38829 command_runner.go:130] >       "size":  "111333938",
	I1213 18:35:57.186865   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.186873   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.186877   38829 command_runner.go:130] >     },
	I1213 18:35:57.186880   38829 command_runner.go:130] >     {
	I1213 18:35:57.186886   38829 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1213 18:35:57.186890   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.186895   38829 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1213 18:35:57.186898   38829 command_runner.go:130] >       ],
	I1213 18:35:57.186902   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.186913   38829 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1213 18:35:57.186921   38829 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1213 18:35:57.186928   38829 command_runner.go:130] >       ],
	I1213 18:35:57.186933   38829 command_runner.go:130] >       "size":  "29037500",
	I1213 18:35:57.186936   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.186942   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.186945   38829 command_runner.go:130] >     },
	I1213 18:35:57.186948   38829 command_runner.go:130] >     {
	I1213 18:35:57.186954   38829 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1213 18:35:57.186958   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.186963   38829 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1213 18:35:57.186966   38829 command_runner.go:130] >       ],
	I1213 18:35:57.186970   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.186977   38829 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1213 18:35:57.186985   38829 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1213 18:35:57.186992   38829 command_runner.go:130] >       ],
	I1213 18:35:57.186996   38829 command_runner.go:130] >       "size":  "74491780",
	I1213 18:35:57.187000   38829 command_runner.go:130] >       "username":  "nonroot",
	I1213 18:35:57.187004   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.187007   38829 command_runner.go:130] >     },
	I1213 18:35:57.187009   38829 command_runner.go:130] >     {
	I1213 18:35:57.187016   38829 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1213 18:35:57.187020   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.187024   38829 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1213 18:35:57.187029   38829 command_runner.go:130] >       ],
	I1213 18:35:57.187033   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.187041   38829 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1213 18:35:57.187050   38829 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1213 18:35:57.187053   38829 command_runner.go:130] >       ],
	I1213 18:35:57.187057   38829 command_runner.go:130] >       "size":  "60857170",
	I1213 18:35:57.187061   38829 command_runner.go:130] >       "uid":  {
	I1213 18:35:57.187064   38829 command_runner.go:130] >         "value":  "0"
	I1213 18:35:57.187067   38829 command_runner.go:130] >       },
	I1213 18:35:57.187075   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.187079   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.187082   38829 command_runner.go:130] >     },
	I1213 18:35:57.187085   38829 command_runner.go:130] >     {
	I1213 18:35:57.187092   38829 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1213 18:35:57.187095   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.187101   38829 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1213 18:35:57.187104   38829 command_runner.go:130] >       ],
	I1213 18:35:57.187108   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.187115   38829 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1213 18:35:57.187123   38829 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1213 18:35:57.187126   38829 command_runner.go:130] >       ],
	I1213 18:35:57.187130   38829 command_runner.go:130] >       "size":  "84949999",
	I1213 18:35:57.187134   38829 command_runner.go:130] >       "uid":  {
	I1213 18:35:57.187137   38829 command_runner.go:130] >         "value":  "0"
	I1213 18:35:57.187146   38829 command_runner.go:130] >       },
	I1213 18:35:57.187149   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.187153   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.187157   38829 command_runner.go:130] >     },
	I1213 18:35:57.187159   38829 command_runner.go:130] >     {
	I1213 18:35:57.187166   38829 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1213 18:35:57.187170   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.187175   38829 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1213 18:35:57.187178   38829 command_runner.go:130] >       ],
	I1213 18:35:57.187182   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.187190   38829 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1213 18:35:57.187199   38829 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1213 18:35:57.187202   38829 command_runner.go:130] >       ],
	I1213 18:35:57.187206   38829 command_runner.go:130] >       "size":  "72170325",
	I1213 18:35:57.187209   38829 command_runner.go:130] >       "uid":  {
	I1213 18:35:57.187213   38829 command_runner.go:130] >         "value":  "0"
	I1213 18:35:57.187216   38829 command_runner.go:130] >       },
	I1213 18:35:57.187219   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.187223   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.187226   38829 command_runner.go:130] >     },
	I1213 18:35:57.187229   38829 command_runner.go:130] >     {
	I1213 18:35:57.187236   38829 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1213 18:35:57.187239   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.187244   38829 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1213 18:35:57.187247   38829 command_runner.go:130] >       ],
	I1213 18:35:57.187251   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.187258   38829 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1213 18:35:57.187266   38829 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1213 18:35:57.187269   38829 command_runner.go:130] >       ],
	I1213 18:35:57.187273   38829 command_runner.go:130] >       "size":  "74106775",
	I1213 18:35:57.187277   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.187280   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.187283   38829 command_runner.go:130] >     },
	I1213 18:35:57.187291   38829 command_runner.go:130] >     {
	I1213 18:35:57.187297   38829 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1213 18:35:57.187300   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.187306   38829 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1213 18:35:57.187309   38829 command_runner.go:130] >       ],
	I1213 18:35:57.187313   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.187321   38829 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1213 18:35:57.187337   38829 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1213 18:35:57.187340   38829 command_runner.go:130] >       ],
	I1213 18:35:57.187344   38829 command_runner.go:130] >       "size":  "49822549",
	I1213 18:35:57.187348   38829 command_runner.go:130] >       "uid":  {
	I1213 18:35:57.187352   38829 command_runner.go:130] >         "value":  "0"
	I1213 18:35:57.187355   38829 command_runner.go:130] >       },
	I1213 18:35:57.187358   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.187362   38829 command_runner.go:130] >       "pinned":  false
	I1213 18:35:57.187364   38829 command_runner.go:130] >     },
	I1213 18:35:57.187367   38829 command_runner.go:130] >     {
	I1213 18:35:57.187374   38829 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1213 18:35:57.187378   38829 command_runner.go:130] >       "repoTags":  [
	I1213 18:35:57.187382   38829 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1213 18:35:57.187385   38829 command_runner.go:130] >       ],
	I1213 18:35:57.187389   38829 command_runner.go:130] >       "repoDigests":  [
	I1213 18:35:57.187396   38829 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1213 18:35:57.187404   38829 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1213 18:35:57.187407   38829 command_runner.go:130] >       ],
	I1213 18:35:57.187410   38829 command_runner.go:130] >       "size":  "519884",
	I1213 18:35:57.187414   38829 command_runner.go:130] >       "uid":  {
	I1213 18:35:57.187417   38829 command_runner.go:130] >         "value":  "65535"
	I1213 18:35:57.187420   38829 command_runner.go:130] >       },
	I1213 18:35:57.187424   38829 command_runner.go:130] >       "username":  "",
	I1213 18:35:57.187428   38829 command_runner.go:130] >       "pinned":  true
	I1213 18:35:57.187431   38829 command_runner.go:130] >     }
	I1213 18:35:57.187434   38829 command_runner.go:130] >   ]
	I1213 18:35:57.187440   38829 command_runner.go:130] > }
	I1213 18:35:57.187570   38829 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 18:35:57.187578   38829 cache_images.go:86] Images are preloaded, skipping loading
	I1213 18:35:57.187585   38829 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1213 18:35:57.187672   38829 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-752103 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-752103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 18:35:57.187756   38829 ssh_runner.go:195] Run: crio config
	I1213 18:35:57.235276   38829 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1213 18:35:57.235304   38829 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1213 18:35:57.235312   38829 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1213 18:35:57.235316   38829 command_runner.go:130] > #
	I1213 18:35:57.235323   38829 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1213 18:35:57.235330   38829 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1213 18:35:57.235336   38829 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1213 18:35:57.235344   38829 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1213 18:35:57.235351   38829 command_runner.go:130] > # reload'.
	I1213 18:35:57.235358   38829 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1213 18:35:57.235367   38829 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1213 18:35:57.235374   38829 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1213 18:35:57.235386   38829 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1213 18:35:57.235390   38829 command_runner.go:130] > [crio]
	I1213 18:35:57.235397   38829 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1213 18:35:57.235406   38829 command_runner.go:130] > # containers images, in this directory.
	I1213 18:35:57.235421   38829 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1213 18:35:57.235432   38829 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1213 18:35:57.235437   38829 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1213 18:35:57.235445   38829 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1213 18:35:57.235452   38829 command_runner.go:130] > # imagestore = ""
	I1213 18:35:57.235458   38829 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1213 18:35:57.235468   38829 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1213 18:35:57.235475   38829 command_runner.go:130] > # storage_driver = "overlay"
	I1213 18:35:57.235481   38829 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1213 18:35:57.235491   38829 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1213 18:35:57.235495   38829 command_runner.go:130] > # storage_option = [
	I1213 18:35:57.235502   38829 command_runner.go:130] > # ]
	I1213 18:35:57.235511   38829 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1213 18:35:57.235518   38829 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1213 18:35:57.235533   38829 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1213 18:35:57.235539   38829 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1213 18:35:57.235547   38829 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1213 18:35:57.235554   38829 command_runner.go:130] > # always happen on a node reboot
	I1213 18:35:57.235660   38829 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1213 18:35:57.235692   38829 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1213 18:35:57.235700   38829 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1213 18:35:57.235705   38829 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1213 18:35:57.235710   38829 command_runner.go:130] > # version_file_persist = ""
	I1213 18:35:57.235718   38829 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1213 18:35:57.235727   38829 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1213 18:35:57.235730   38829 command_runner.go:130] > # internal_wipe = true
	I1213 18:35:57.235739   38829 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1213 18:35:57.235744   38829 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1213 18:35:57.235748   38829 command_runner.go:130] > # internal_repair = true
	I1213 18:35:57.235754   38829 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1213 18:35:57.235760   38829 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1213 18:35:57.235769   38829 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1213 18:35:57.235775   38829 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1213 18:35:57.235781   38829 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1213 18:35:57.235784   38829 command_runner.go:130] > [crio.api]
	I1213 18:35:57.235790   38829 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1213 18:35:57.235795   38829 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1213 18:35:57.235800   38829 command_runner.go:130] > # IP address on which the stream server will listen.
	I1213 18:35:57.235804   38829 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1213 18:35:57.235811   38829 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1213 18:35:57.235816   38829 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1213 18:35:57.235819   38829 command_runner.go:130] > # stream_port = "0"
	I1213 18:35:57.235824   38829 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1213 18:35:57.235828   38829 command_runner.go:130] > # stream_enable_tls = false
	I1213 18:35:57.235838   38829 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1213 18:35:57.235842   38829 command_runner.go:130] > # stream_idle_timeout = ""
	I1213 18:35:57.235849   38829 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1213 18:35:57.235854   38829 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1213 18:35:57.235858   38829 command_runner.go:130] > # stream_tls_cert = ""
	I1213 18:35:57.235864   38829 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1213 18:35:57.235869   38829 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1213 18:35:57.235873   38829 command_runner.go:130] > # stream_tls_key = ""
	I1213 18:35:57.235880   38829 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1213 18:35:57.235886   38829 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1213 18:35:57.235892   38829 command_runner.go:130] > # automatically pick up the changes.
	I1213 18:35:57.235896   38829 command_runner.go:130] > # stream_tls_ca = ""
	I1213 18:35:57.235914   38829 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1213 18:35:57.235918   38829 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1213 18:35:57.235926   38829 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1213 18:35:57.235930   38829 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1213 18:35:57.235936   38829 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1213 18:35:57.235942   38829 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1213 18:35:57.235945   38829 command_runner.go:130] > [crio.runtime]
	I1213 18:35:57.235951   38829 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1213 18:35:57.235956   38829 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1213 18:35:57.235960   38829 command_runner.go:130] > # "nofile=1024:2048"
	I1213 18:35:57.235965   38829 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1213 18:35:57.235969   38829 command_runner.go:130] > # default_ulimits = [
	I1213 18:35:57.235972   38829 command_runner.go:130] > # ]
	I1213 18:35:57.235978   38829 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1213 18:35:57.236231   38829 command_runner.go:130] > # no_pivot = false
	I1213 18:35:57.236246   38829 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1213 18:35:57.236252   38829 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1213 18:35:57.236258   38829 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1213 18:35:57.236264   38829 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1213 18:35:57.236272   38829 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1213 18:35:57.236280   38829 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1213 18:35:57.236292   38829 command_runner.go:130] > # conmon = ""
	I1213 18:35:57.236297   38829 command_runner.go:130] > # Cgroup setting for conmon
	I1213 18:35:57.236304   38829 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1213 18:35:57.236308   38829 command_runner.go:130] > conmon_cgroup = "pod"
	I1213 18:35:57.236314   38829 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1213 18:35:57.236320   38829 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1213 18:35:57.236335   38829 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1213 18:35:57.236339   38829 command_runner.go:130] > # conmon_env = [
	I1213 18:35:57.236342   38829 command_runner.go:130] > # ]
	I1213 18:35:57.236348   38829 command_runner.go:130] > # Additional environment variables to set for all the
	I1213 18:35:57.236353   38829 command_runner.go:130] > # containers. These are overridden if set in the
	I1213 18:35:57.236358   38829 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1213 18:35:57.236362   38829 command_runner.go:130] > # default_env = [
	I1213 18:35:57.236365   38829 command_runner.go:130] > # ]
	I1213 18:35:57.236370   38829 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1213 18:35:57.236378   38829 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1213 18:35:57.236386   38829 command_runner.go:130] > # selinux = false
	I1213 18:35:57.236397   38829 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1213 18:35:57.236405   38829 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1213 18:35:57.236415   38829 command_runner.go:130] > # This option supports live configuration reload.
	I1213 18:35:57.236419   38829 command_runner.go:130] > # seccomp_profile = ""
	I1213 18:35:57.236425   38829 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1213 18:35:57.236436   38829 command_runner.go:130] > # This option supports live configuration reload.
	I1213 18:35:57.236440   38829 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1213 18:35:57.236447   38829 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1213 18:35:57.236457   38829 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1213 18:35:57.236464   38829 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1213 18:35:57.236470   38829 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1213 18:35:57.236477   38829 command_runner.go:130] > # This option supports live configuration reload.
	I1213 18:35:57.236482   38829 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1213 18:35:57.236493   38829 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1213 18:35:57.236497   38829 command_runner.go:130] > # the cgroup blockio controller.
	I1213 18:35:57.236501   38829 command_runner.go:130] > # blockio_config_file = ""
	I1213 18:35:57.236512   38829 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1213 18:35:57.236519   38829 command_runner.go:130] > # blockio parameters.
	I1213 18:35:57.236524   38829 command_runner.go:130] > # blockio_reload = false
	I1213 18:35:57.236530   38829 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1213 18:35:57.236538   38829 command_runner.go:130] > # irqbalance daemon.
	I1213 18:35:57.236543   38829 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1213 18:35:57.236550   38829 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1213 18:35:57.236560   38829 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1213 18:35:57.236567   38829 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1213 18:35:57.236573   38829 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1213 18:35:57.236579   38829 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1213 18:35:57.236584   38829 command_runner.go:130] > # This option supports live configuration reload.
	I1213 18:35:57.236589   38829 command_runner.go:130] > # rdt_config_file = ""
	I1213 18:35:57.236594   38829 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1213 18:35:57.236600   38829 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1213 18:35:57.236606   38829 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1213 18:35:57.236612   38829 command_runner.go:130] > # separate_pull_cgroup = ""
	I1213 18:35:57.236619   38829 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1213 18:35:57.236626   38829 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1213 18:35:57.236633   38829 command_runner.go:130] > # will be added.
	I1213 18:35:57.236637   38829 command_runner.go:130] > # default_capabilities = [
	I1213 18:35:57.236640   38829 command_runner.go:130] > # 	"CHOWN",
	I1213 18:35:57.236644   38829 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1213 18:35:57.236647   38829 command_runner.go:130] > # 	"FSETID",
	I1213 18:35:57.236650   38829 command_runner.go:130] > # 	"FOWNER",
	I1213 18:35:57.236653   38829 command_runner.go:130] > # 	"SETGID",
	I1213 18:35:57.236656   38829 command_runner.go:130] > # 	"SETUID",
	I1213 18:35:57.236674   38829 command_runner.go:130] > # 	"SETPCAP",
	I1213 18:35:57.236679   38829 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1213 18:35:57.236682   38829 command_runner.go:130] > # 	"KILL",
	I1213 18:35:57.236685   38829 command_runner.go:130] > # ]
	I1213 18:35:57.236693   38829 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1213 18:35:57.236702   38829 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1213 18:35:57.236710   38829 command_runner.go:130] > # add_inheritable_capabilities = false
	I1213 18:35:57.236716   38829 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1213 18:35:57.236722   38829 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1213 18:35:57.236726   38829 command_runner.go:130] > default_sysctls = [
	I1213 18:35:57.236731   38829 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1213 18:35:57.236734   38829 command_runner.go:130] > ]
	I1213 18:35:57.236738   38829 command_runner.go:130] > # List of devices on the host that a
	I1213 18:35:57.236748   38829 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1213 18:35:57.236755   38829 command_runner.go:130] > # allowed_devices = [
	I1213 18:35:57.236758   38829 command_runner.go:130] > # 	"/dev/fuse",
	I1213 18:35:57.236762   38829 command_runner.go:130] > # 	"/dev/net/tun",
	I1213 18:35:57.236772   38829 command_runner.go:130] > # ]
	I1213 18:35:57.236777   38829 command_runner.go:130] > # List of additional devices. specified as
	I1213 18:35:57.236784   38829 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1213 18:35:57.236794   38829 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1213 18:35:57.236800   38829 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1213 18:35:57.236804   38829 command_runner.go:130] > # additional_devices = [
	I1213 18:35:57.236832   38829 command_runner.go:130] > # ]
	I1213 18:35:57.236837   38829 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1213 18:35:57.236841   38829 command_runner.go:130] > # cdi_spec_dirs = [
	I1213 18:35:57.236844   38829 command_runner.go:130] > # 	"/etc/cdi",
	I1213 18:35:57.236848   38829 command_runner.go:130] > # 	"/var/run/cdi",
	I1213 18:35:57.236854   38829 command_runner.go:130] > # ]
	I1213 18:35:57.236861   38829 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1213 18:35:57.236870   38829 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1213 18:35:57.236874   38829 command_runner.go:130] > # Defaults to false.
	I1213 18:35:57.236880   38829 command_runner.go:130] > # device_ownership_from_security_context = false
	I1213 18:35:57.236891   38829 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1213 18:35:57.236898   38829 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1213 18:35:57.236901   38829 command_runner.go:130] > # hooks_dir = [
	I1213 18:35:57.236908   38829 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1213 18:35:57.236915   38829 command_runner.go:130] > # ]
	I1213 18:35:57.236921   38829 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1213 18:35:57.236931   38829 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1213 18:35:57.236939   38829 command_runner.go:130] > # its default mounts from the following two files:
	I1213 18:35:57.236942   38829 command_runner.go:130] > #
	I1213 18:35:57.236949   38829 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1213 18:35:57.236959   38829 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1213 18:35:57.236964   38829 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1213 18:35:57.236967   38829 command_runner.go:130] > #
	I1213 18:35:57.236974   38829 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1213 18:35:57.236984   38829 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1213 18:35:57.236990   38829 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1213 18:35:57.236996   38829 command_runner.go:130] > #      only add mounts it finds in this file.
	I1213 18:35:57.237024   38829 command_runner.go:130] > #
	I1213 18:35:57.237029   38829 command_runner.go:130] > # default_mounts_file = ""
	I1213 18:35:57.237035   38829 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1213 18:35:57.237044   38829 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1213 18:35:57.237052   38829 command_runner.go:130] > # pids_limit = -1
	I1213 18:35:57.237058   38829 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1213 18:35:57.237065   38829 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1213 18:35:57.237075   38829 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1213 18:35:57.237084   38829 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1213 18:35:57.237092   38829 command_runner.go:130] > # log_size_max = -1
	I1213 18:35:57.237099   38829 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1213 18:35:57.237104   38829 command_runner.go:130] > # log_to_journald = false
	I1213 18:35:57.237114   38829 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1213 18:35:57.237119   38829 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1213 18:35:57.237125   38829 command_runner.go:130] > # Path to directory for container attach sockets.
	I1213 18:35:57.237130   38829 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1213 18:35:57.237137   38829 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1213 18:35:57.237145   38829 command_runner.go:130] > # bind_mount_prefix = ""
	I1213 18:35:57.237151   38829 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1213 18:35:57.237155   38829 command_runner.go:130] > # read_only = false
	I1213 18:35:57.237162   38829 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1213 18:35:57.237173   38829 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1213 18:35:57.237181   38829 command_runner.go:130] > # live configuration reload.
	I1213 18:35:57.237191   38829 command_runner.go:130] > # log_level = "info"
	I1213 18:35:57.237200   38829 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1213 18:35:57.237212   38829 command_runner.go:130] > # This option supports live configuration reload.
	I1213 18:35:57.237216   38829 command_runner.go:130] > # log_filter = ""
	I1213 18:35:57.237222   38829 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1213 18:35:57.237228   38829 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1213 18:35:57.237237   38829 command_runner.go:130] > # separated by comma.
	I1213 18:35:57.237245   38829 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 18:35:57.237249   38829 command_runner.go:130] > # uid_mappings = ""
	I1213 18:35:57.237255   38829 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1213 18:35:57.237265   38829 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1213 18:35:57.237269   38829 command_runner.go:130] > # separated by comma.
	I1213 18:35:57.237277   38829 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 18:35:57.237284   38829 command_runner.go:130] > # gid_mappings = ""
	I1213 18:35:57.237290   38829 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1213 18:35:57.237297   38829 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1213 18:35:57.237311   38829 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1213 18:35:57.237319   38829 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 18:35:57.237323   38829 command_runner.go:130] > # minimum_mappable_uid = -1
	I1213 18:35:57.237329   38829 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1213 18:35:57.237339   38829 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1213 18:35:57.237345   38829 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1213 18:35:57.237354   38829 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 18:35:57.237949   38829 command_runner.go:130] > # minimum_mappable_gid = -1
	I1213 18:35:57.237966   38829 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1213 18:35:57.237972   38829 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1213 18:35:57.237979   38829 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1213 18:35:57.238476   38829 command_runner.go:130] > # ctr_stop_timeout = 30
	I1213 18:35:57.238490   38829 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1213 18:35:57.238497   38829 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1213 18:35:57.238503   38829 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1213 18:35:57.238519   38829 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1213 18:35:57.238932   38829 command_runner.go:130] > # drop_infra_ctr = true
	I1213 18:35:57.238947   38829 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1213 18:35:57.238955   38829 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1213 18:35:57.238963   38829 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1213 18:35:57.239291   38829 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1213 18:35:57.239306   38829 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1213 18:35:57.239313   38829 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1213 18:35:57.239319   38829 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1213 18:35:57.239324   38829 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1213 18:35:57.239634   38829 command_runner.go:130] > # shared_cpuset = ""
	I1213 18:35:57.239648   38829 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1213 18:35:57.239654   38829 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1213 18:35:57.240060   38829 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1213 18:35:57.240075   38829 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1213 18:35:57.240414   38829 command_runner.go:130] > # pinns_path = ""
	I1213 18:35:57.240427   38829 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1213 18:35:57.240434   38829 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1213 18:35:57.240846   38829 command_runner.go:130] > # enable_criu_support = true
	I1213 18:35:57.240873   38829 command_runner.go:130] > # Enable/disable the generation of the container,
	I1213 18:35:57.240881   38829 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1213 18:35:57.241322   38829 command_runner.go:130] > # enable_pod_events = false
	I1213 18:35:57.241336   38829 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1213 18:35:57.241342   38829 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1213 18:35:57.241756   38829 command_runner.go:130] > # default_runtime = "crun"
	I1213 18:35:57.241768   38829 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1213 18:35:57.241777   38829 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1213 18:35:57.241786   38829 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1213 18:35:57.241791   38829 command_runner.go:130] > # creation as a file is not desired either.
	I1213 18:35:57.241800   38829 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1213 18:35:57.241820   38829 command_runner.go:130] > # the hostname is being managed dynamically.
	I1213 18:35:57.242010   38829 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1213 18:35:57.242355   38829 command_runner.go:130] > # ]
	I1213 18:35:57.242370   38829 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1213 18:35:57.242386   38829 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1213 18:35:57.242394   38829 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1213 18:35:57.242400   38829 command_runner.go:130] > # Each entry in the table should follow the format:
	I1213 18:35:57.242406   38829 command_runner.go:130] > #
	I1213 18:35:57.242412   38829 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1213 18:35:57.242419   38829 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1213 18:35:57.242423   38829 command_runner.go:130] > # runtime_type = "oci"
	I1213 18:35:57.242427   38829 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1213 18:35:57.242434   38829 command_runner.go:130] > # inherit_default_runtime = false
	I1213 18:35:57.242441   38829 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1213 18:35:57.242445   38829 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1213 18:35:57.242449   38829 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1213 18:35:57.242460   38829 command_runner.go:130] > # monitor_env = []
	I1213 18:35:57.242465   38829 command_runner.go:130] > # privileged_without_host_devices = false
	I1213 18:35:57.242470   38829 command_runner.go:130] > # allowed_annotations = []
	I1213 18:35:57.242487   38829 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1213 18:35:57.242491   38829 command_runner.go:130] > # no_sync_log = false
	I1213 18:35:57.242496   38829 command_runner.go:130] > # default_annotations = {}
	I1213 18:35:57.242500   38829 command_runner.go:130] > # stream_websockets = false
	I1213 18:35:57.242507   38829 command_runner.go:130] > # seccomp_profile = ""
	I1213 18:35:57.242553   38829 command_runner.go:130] > # Where:
	I1213 18:35:57.242564   38829 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1213 18:35:57.242570   38829 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1213 18:35:57.242577   38829 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1213 18:35:57.242583   38829 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1213 18:35:57.242587   38829 command_runner.go:130] > #   in $PATH.
	I1213 18:35:57.242593   38829 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1213 18:35:57.242598   38829 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1213 18:35:57.242614   38829 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1213 18:35:57.242620   38829 command_runner.go:130] > #   state.
	I1213 18:35:57.242626   38829 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1213 18:35:57.242633   38829 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1213 18:35:57.242641   38829 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1213 18:35:57.242647   38829 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1213 18:35:57.242652   38829 command_runner.go:130] > #   the values from the default runtime on load time.
	I1213 18:35:57.242659   38829 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1213 18:35:57.242665   38829 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1213 18:35:57.242671   38829 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1213 18:35:57.242684   38829 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1213 18:35:57.242694   38829 command_runner.go:130] > #   The currently recognized values are:
	I1213 18:35:57.242701   38829 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1213 18:35:57.242709   38829 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1213 18:35:57.242718   38829 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1213 18:35:57.242724   38829 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1213 18:35:57.242736   38829 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1213 18:35:57.242745   38829 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1213 18:35:57.242761   38829 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1213 18:35:57.242774   38829 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1213 18:35:57.242781   38829 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1213 18:35:57.242788   38829 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1213 18:35:57.242795   38829 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1213 18:35:57.242802   38829 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1213 18:35:57.242813   38829 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1213 18:35:57.242824   38829 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1213 18:35:57.242842   38829 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1213 18:35:57.242850   38829 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1213 18:35:57.242861   38829 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1213 18:35:57.242865   38829 command_runner.go:130] > #   deprecated option "conmon".
	I1213 18:35:57.242873   38829 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1213 18:35:57.242881   38829 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1213 18:35:57.242888   38829 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1213 18:35:57.242894   38829 command_runner.go:130] > #   should be moved to the container's cgroup
	I1213 18:35:57.242911   38829 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1213 18:35:57.242917   38829 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1213 18:35:57.242924   38829 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1213 18:35:57.242933   38829 command_runner.go:130] > #   conmon-rs by using:
	I1213 18:35:57.242941   38829 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1213 18:35:57.242954   38829 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1213 18:35:57.242962   38829 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1213 18:35:57.242973   38829 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1213 18:35:57.242978   38829 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1213 18:35:57.242995   38829 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1213 18:35:57.243003   38829 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1213 18:35:57.243008   38829 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1213 18:35:57.243017   38829 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1213 18:35:57.243027   38829 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1213 18:35:57.243033   38829 command_runner.go:130] > #   when a machine crash happens.
	I1213 18:35:57.243040   38829 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1213 18:35:57.243049   38829 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1213 18:35:57.243065   38829 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1213 18:35:57.243070   38829 command_runner.go:130] > #   seccomp profile for the runtime.
	I1213 18:35:57.243076   38829 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1213 18:35:57.243084   38829 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1213 18:35:57.243094   38829 command_runner.go:130] > #
	I1213 18:35:57.243099   38829 command_runner.go:130] > # Using the seccomp notifier feature:
	I1213 18:35:57.243102   38829 command_runner.go:130] > #
	I1213 18:35:57.243113   38829 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1213 18:35:57.243123   38829 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1213 18:35:57.243126   38829 command_runner.go:130] > #
	I1213 18:35:57.243139   38829 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1213 18:35:57.243153   38829 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1213 18:35:57.243157   38829 command_runner.go:130] > #
	I1213 18:35:57.243163   38829 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1213 18:35:57.243170   38829 command_runner.go:130] > # feature.
	I1213 18:35:57.243173   38829 command_runner.go:130] > #
	I1213 18:35:57.243179   38829 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1213 18:35:57.243186   38829 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1213 18:35:57.243196   38829 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1213 18:35:57.243208   38829 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1213 18:35:57.243219   38829 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1213 18:35:57.243222   38829 command_runner.go:130] > #
	I1213 18:35:57.243229   38829 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1213 18:35:57.243235   38829 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1213 18:35:57.243256   38829 command_runner.go:130] > #
	I1213 18:35:57.243267   38829 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1213 18:35:57.243274   38829 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1213 18:35:57.243283   38829 command_runner.go:130] > #
	I1213 18:35:57.243294   38829 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1213 18:35:57.243301   38829 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1213 18:35:57.243304   38829 command_runner.go:130] > # limitation.
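	The handler format and the seccomp-notifier notes in the dumped config above can be pulled together in a small TOML sketch. This is an illustration only and is not part of the logged configuration: the handler name "crun-notify" is hypothetical, while the binary and monitor paths simply mirror the crun entry that the config dump prints next.

	# Hypothetical runtime handler that permits the seccomp notifier annotation
	# (illustrative sketch; not present in the logged crio config).
	[crio.runtime.runtimes.crun-notify]
	runtime_path = "/usr/libexec/crio/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"
	monitor_path = "/usr/libexec/crio/conmon"
	monitor_cgroup = "pod"
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",
	]

	A pod would then opt in by setting the "io.kubernetes.cri-o.seccompNotifierAction" annotation (for example to "stop") and restartPolicy "Never", as the comments above describe.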
	I1213 18:35:57.243341   38829 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1213 18:35:57.243623   38829 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1213 18:35:57.243757   38829 command_runner.go:130] > runtime_type = ""
	I1213 18:35:57.244003   38829 command_runner.go:130] > runtime_root = "/run/crun"
	I1213 18:35:57.244255   38829 command_runner.go:130] > inherit_default_runtime = false
	I1213 18:35:57.244399   38829 command_runner.go:130] > runtime_config_path = ""
	I1213 18:35:57.244539   38829 command_runner.go:130] > container_min_memory = ""
	I1213 18:35:57.244777   38829 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1213 18:35:57.245055   38829 command_runner.go:130] > monitor_cgroup = "pod"
	I1213 18:35:57.245214   38829 command_runner.go:130] > monitor_exec_cgroup = ""
	I1213 18:35:57.245448   38829 command_runner.go:130] > allowed_annotations = [
	I1213 18:35:57.245605   38829 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1213 18:35:57.245830   38829 command_runner.go:130] > ]
	I1213 18:35:57.246064   38829 command_runner.go:130] > privileged_without_host_devices = false
	I1213 18:35:57.246554   38829 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1213 18:35:57.246808   38829 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1213 18:35:57.246935   38829 command_runner.go:130] > runtime_type = ""
	I1213 18:35:57.247167   38829 command_runner.go:130] > runtime_root = "/run/runc"
	I1213 18:35:57.247404   38829 command_runner.go:130] > inherit_default_runtime = false
	I1213 18:35:57.247591   38829 command_runner.go:130] > runtime_config_path = ""
	I1213 18:35:57.247761   38829 command_runner.go:130] > container_min_memory = ""
	I1213 18:35:57.248046   38829 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1213 18:35:57.248332   38829 command_runner.go:130] > monitor_cgroup = "pod"
	I1213 18:35:57.248492   38829 command_runner.go:130] > monitor_exec_cgroup = ""
	I1213 18:35:57.248957   38829 command_runner.go:130] > privileged_without_host_devices = false
	I1213 18:35:57.249339   38829 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1213 18:35:57.249353   38829 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1213 18:35:57.249360   38829 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1213 18:35:57.249369   38829 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1213 18:35:57.249380   38829 command_runner.go:130] > # The currently supported resources are "cpuperiod" "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1213 18:35:57.249391   38829 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1213 18:35:57.249420   38829 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1213 18:35:57.249432   38829 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1213 18:35:57.249442   38829 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1213 18:35:57.249454   38829 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1213 18:35:57.249460   38829 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1213 18:35:57.249474   38829 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1213 18:35:57.249483   38829 command_runner.go:130] > # Example:
	I1213 18:35:57.249488   38829 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1213 18:35:57.249494   38829 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1213 18:35:57.249507   38829 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1213 18:35:57.249513   38829 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1213 18:35:57.249522   38829 command_runner.go:130] > # cpuset = "0-1"
	I1213 18:35:57.249525   38829 command_runner.go:130] > # cpushares = "5"
	I1213 18:35:57.249529   38829 command_runner.go:130] > # cpuquota = "1000"
	I1213 18:35:57.249533   38829 command_runner.go:130] > # cpuperiod = "100000"
	I1213 18:35:57.249548   38829 command_runner.go:130] > # cpulimit = "35"
	I1213 18:35:57.249556   38829 command_runner.go:130] > # Where:
	I1213 18:35:57.249560   38829 command_runner.go:130] > # The workload name is workload-type.
	I1213 18:35:57.249568   38829 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1213 18:35:57.249574   38829 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1213 18:35:57.249585   38829 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1213 18:35:57.249594   38829 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1213 18:35:57.249604   38829 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1213 18:35:57.249739   38829 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1213 18:35:57.249752   38829 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1213 18:35:57.249757   38829 command_runner.go:130] > # Default value is set to true
	I1213 18:35:57.250196   38829 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1213 18:35:57.250210   38829 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1213 18:35:57.250216   38829 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1213 18:35:57.250220   38829 command_runner.go:130] > # Default value is set to 'false'
	I1213 18:35:57.250699   38829 command_runner.go:130] > # disable_hostport_mapping = false
	I1213 18:35:57.250712   38829 command_runner.go:130] > # timezone To set the timezone for a container in CRI-O.
	I1213 18:35:57.250722   38829 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1213 18:35:57.251071   38829 command_runner.go:130] > # timezone = ""
	I1213 18:35:57.251082   38829 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1213 18:35:57.251086   38829 command_runner.go:130] > #
	I1213 18:35:57.251093   38829 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1213 18:35:57.251100   38829 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1213 18:35:57.251103   38829 command_runner.go:130] > [crio.image]
	I1213 18:35:57.251109   38829 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1213 18:35:57.251555   38829 command_runner.go:130] > # default_transport = "docker://"
	I1213 18:35:57.251569   38829 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1213 18:35:57.251576   38829 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1213 18:35:57.251964   38829 command_runner.go:130] > # global_auth_file = ""
	I1213 18:35:57.251977   38829 command_runner.go:130] > # The image used to instantiate infra containers.
	I1213 18:35:57.251982   38829 command_runner.go:130] > # This option supports live configuration reload.
	I1213 18:35:57.252443   38829 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1213 18:35:57.252459   38829 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1213 18:35:57.252468   38829 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1213 18:35:57.252474   38829 command_runner.go:130] > # This option supports live configuration reload.
	I1213 18:35:57.252817   38829 command_runner.go:130] > # pause_image_auth_file = ""
	I1213 18:35:57.252830   38829 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1213 18:35:57.252837   38829 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1213 18:35:57.252844   38829 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1213 18:35:57.252849   38829 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1213 18:35:57.253309   38829 command_runner.go:130] > # pause_command = "/pause"
	I1213 18:35:57.253323   38829 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1213 18:35:57.253330   38829 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1213 18:35:57.253336   38829 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1213 18:35:57.253342   38829 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1213 18:35:57.253349   38829 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1213 18:35:57.253355   38829 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1213 18:35:57.253590   38829 command_runner.go:130] > # pinned_images = [
	I1213 18:35:57.253600   38829 command_runner.go:130] > # ]
	I1213 18:35:57.253607   38829 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1213 18:35:57.253614   38829 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1213 18:35:57.253621   38829 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1213 18:35:57.253627   38829 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1213 18:35:57.253636   38829 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1213 18:35:57.253910   38829 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1213 18:35:57.253925   38829 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1213 18:35:57.253939   38829 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1213 18:35:57.253949   38829 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1213 18:35:57.253960   38829 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1213 18:35:57.253967   38829 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1213 18:35:57.253980   38829 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1213 18:35:57.253986   38829 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1213 18:35:57.253995   38829 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1213 18:35:57.254000   38829 command_runner.go:130] > # changing them here.
	I1213 18:35:57.254012   38829 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1213 18:35:57.254016   38829 command_runner.go:130] > # insecure_registries = [
	I1213 18:35:57.254268   38829 command_runner.go:130] > # ]
	I1213 18:35:57.254281   38829 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1213 18:35:57.254287   38829 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1213 18:35:57.254424   38829 command_runner.go:130] > # image_volumes = "mkdir"
	I1213 18:35:57.254436   38829 command_runner.go:130] > # Temporary directory to use for storing big files
	I1213 18:35:57.254580   38829 command_runner.go:130] > # big_files_temporary_dir = ""
	I1213 18:35:57.254592   38829 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1213 18:35:57.254600   38829 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1213 18:35:57.254897   38829 command_runner.go:130] > # auto_reload_registries = false
	I1213 18:35:57.254910   38829 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1213 18:35:57.254920   38829 command_runner.go:130] > # gets canceled. This value will be also used for calculating the pull progress interval to pull_progress_timeout / 10.
	I1213 18:35:57.254926   38829 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1213 18:35:57.254930   38829 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1213 18:35:57.254935   38829 command_runner.go:130] > # The mode of short name resolution.
	I1213 18:35:57.254941   38829 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1213 18:35:57.254949   38829 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used, but the results are ambiguous.
	I1213 18:35:57.254965   38829 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1213 18:35:57.254970   38829 command_runner.go:130] > # short_name_mode = "enforcing"
	I1213 18:35:57.254982   38829 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1213 18:35:57.254988   38829 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1213 18:35:57.255234   38829 command_runner.go:130] > # oci_artifact_mount_support = true
	I1213 18:35:57.255247   38829 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1213 18:35:57.255251   38829 command_runner.go:130] > # CNI plugins.
	I1213 18:35:57.255254   38829 command_runner.go:130] > [crio.network]
	I1213 18:35:57.255260   38829 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1213 18:35:57.255266   38829 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1213 18:35:57.255275   38829 command_runner.go:130] > # cni_default_network = ""
	I1213 18:35:57.255283   38829 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1213 18:35:57.255416   38829 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1213 18:35:57.255429   38829 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1213 18:35:57.255573   38829 command_runner.go:130] > # plugin_dirs = [
	I1213 18:35:57.255807   38829 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1213 18:35:57.255816   38829 command_runner.go:130] > # ]
	I1213 18:35:57.255821   38829 command_runner.go:130] > # List of included pod metrics.
	I1213 18:35:57.255825   38829 command_runner.go:130] > # included_pod_metrics = [
	I1213 18:35:57.255828   38829 command_runner.go:130] > # ]
	I1213 18:35:57.255834   38829 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1213 18:35:57.255838   38829 command_runner.go:130] > [crio.metrics]
	I1213 18:35:57.255843   38829 command_runner.go:130] > # Globally enable or disable metrics support.
	I1213 18:35:57.255847   38829 command_runner.go:130] > # enable_metrics = false
	I1213 18:35:57.255851   38829 command_runner.go:130] > # Specify enabled metrics collectors.
	I1213 18:35:57.255867   38829 command_runner.go:130] > # Per default all metrics are enabled.
	I1213 18:35:57.255879   38829 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1213 18:35:57.255889   38829 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1213 18:35:57.255900   38829 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1213 18:35:57.255905   38829 command_runner.go:130] > # metrics_collectors = [
	I1213 18:35:57.256016   38829 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1213 18:35:57.256027   38829 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1213 18:35:57.256031   38829 command_runner.go:130] > # 	"containers_oom_total",
	I1213 18:35:57.256331   38829 command_runner.go:130] > # 	"processes_defunct",
	I1213 18:35:57.256341   38829 command_runner.go:130] > # 	"operations_total",
	I1213 18:35:57.256346   38829 command_runner.go:130] > # 	"operations_latency_seconds",
	I1213 18:35:57.256351   38829 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1213 18:35:57.256361   38829 command_runner.go:130] > # 	"operations_errors_total",
	I1213 18:35:57.256365   38829 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1213 18:35:57.256370   38829 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1213 18:35:57.256374   38829 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1213 18:35:57.257117   38829 command_runner.go:130] > # 	"image_pulls_success_total",
	I1213 18:35:57.257132   38829 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1213 18:35:57.257137   38829 command_runner.go:130] > # 	"containers_oom_count_total",
	I1213 18:35:57.257143   38829 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1213 18:35:57.257155   38829 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1213 18:35:57.257161   38829 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1213 18:35:57.257170   38829 command_runner.go:130] > # ]
	I1213 18:35:57.257177   38829 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1213 18:35:57.257185   38829 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1213 18:35:57.257191   38829 command_runner.go:130] > # The port on which the metrics server will listen.
	I1213 18:35:57.257199   38829 command_runner.go:130] > # metrics_port = 9090
	I1213 18:35:57.257204   38829 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1213 18:35:57.257212   38829 command_runner.go:130] > # metrics_socket = ""
	I1213 18:35:57.257233   38829 command_runner.go:130] > # The certificate for the secure metrics server.
	I1213 18:35:57.257245   38829 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1213 18:35:57.257252   38829 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1213 18:35:57.257260   38829 command_runner.go:130] > # certificate on any modification event.
	I1213 18:35:57.257270   38829 command_runner.go:130] > # metrics_cert = ""
	I1213 18:35:57.257276   38829 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1213 18:35:57.257285   38829 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1213 18:35:57.257289   38829 command_runner.go:130] > # metrics_key = ""
	I1213 18:35:57.257299   38829 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1213 18:35:57.257318   38829 command_runner.go:130] > [crio.tracing]
	I1213 18:35:57.257325   38829 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1213 18:35:57.257329   38829 command_runner.go:130] > # enable_tracing = false
	I1213 18:35:57.257339   38829 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1213 18:35:57.257343   38829 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1213 18:35:57.257354   38829 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1213 18:35:57.257366   38829 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1213 18:35:57.257381   38829 command_runner.go:130] > # CRI-O NRI configuration.
	I1213 18:35:57.257393   38829 command_runner.go:130] > [crio.nri]
	I1213 18:35:57.257402   38829 command_runner.go:130] > # Globally enable or disable NRI.
	I1213 18:35:57.257406   38829 command_runner.go:130] > # enable_nri = true
	I1213 18:35:57.257410   38829 command_runner.go:130] > # NRI socket to listen on.
	I1213 18:35:57.257415   38829 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1213 18:35:57.257423   38829 command_runner.go:130] > # NRI plugin directory to use.
	I1213 18:35:57.257428   38829 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1213 18:35:57.257437   38829 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1213 18:35:57.257442   38829 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1213 18:35:57.257457   38829 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1213 18:35:57.257514   38829 command_runner.go:130] > # nri_disable_connections = false
	I1213 18:35:57.257530   38829 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1213 18:35:57.257535   38829 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1213 18:35:57.257544   38829 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1213 18:35:57.257549   38829 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1213 18:35:57.257558   38829 command_runner.go:130] > # NRI default validator configuration.
	I1213 18:35:57.257566   38829 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1213 18:35:57.257576   38829 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1213 18:35:57.257584   38829 command_runner.go:130] > # can be restricted/rejected:
	I1213 18:35:57.257588   38829 command_runner.go:130] > # - OCI hook injection
	I1213 18:35:57.257597   38829 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1213 18:35:57.257609   38829 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1213 18:35:57.257615   38829 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1213 18:35:57.257624   38829 command_runner.go:130] > # - adjustment of linux namespaces
	I1213 18:35:57.257632   38829 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1213 18:35:57.257642   38829 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1213 18:35:57.257652   38829 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1213 18:35:57.257660   38829 command_runner.go:130] > #
	I1213 18:35:57.257664   38829 command_runner.go:130] > # [crio.nri.default_validator]
	I1213 18:35:57.257672   38829 command_runner.go:130] > # nri_enable_default_validator = false
	I1213 18:35:57.257686   38829 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1213 18:35:57.257692   38829 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1213 18:35:57.257699   38829 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1213 18:35:57.257712   38829 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1213 18:35:57.257721   38829 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1213 18:35:57.257726   38829 command_runner.go:130] > # nri_validator_required_plugins = [
	I1213 18:35:57.257732   38829 command_runner.go:130] > # ]
	I1213 18:35:57.257738   38829 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1213 18:35:57.257747   38829 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1213 18:35:57.257763   38829 command_runner.go:130] > [crio.stats]
	I1213 18:35:57.257772   38829 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1213 18:35:57.257778   38829 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1213 18:35:57.257782   38829 command_runner.go:130] > # stats_collection_period = 0
	I1213 18:35:57.257792   38829 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1213 18:35:57.257800   38829 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1213 18:35:57.257809   38829 command_runner.go:130] > # collection_period = 0
	I1213 18:35:57.259571   38829 command_runner.go:130] ! time="2025-12-13T18:35:57.21464252Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1213 18:35:57.259589   38829 command_runner.go:130] ! time="2025-12-13T18:35:57.214677794Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1213 18:35:57.259613   38829 command_runner.go:130] ! time="2025-12-13T18:35:57.214706635Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1213 18:35:57.259625   38829 command_runner.go:130] ! time="2025-12-13T18:35:57.21473084Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1213 18:35:57.259635   38829 command_runner.go:130] ! time="2025-12-13T18:35:57.214801782Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:35:57.259643   38829 command_runner.go:130] ! time="2025-12-13T18:35:57.215251382Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1213 18:35:57.259658   38829 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
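
Everything up to this point is CRI-O echoing back its merged configuration; the drop-ins under /etc/crio/crio.conf.d are layered on top of the defaults, and the only uncommented override visible in the dump is signature_policy = "/etc/crio/policy.json". A minimal Go sketch (assuming the github.com/BurntSushi/toml package; the drop-in path is taken from the "Updating config from drop-in file" lines just above) that decodes one drop-in and prints whatever [crio.image] overrides it carries:

package main

import (
	"fmt"
	"log"

	"github.com/BurntSushi/toml"
)

// Only the fields we want to look at; any other keys in the drop-in are ignored.
type crioConf struct {
	Crio struct {
		Image struct {
			SignaturePolicy string `toml:"signature_policy"`
			PauseImage      string `toml:"pause_image"`
		} `toml:"image"`
	} `toml:"crio"`
}

func main() {
	var c crioConf
	// Path taken from the drop-in list logged above; empty fields simply mean
	// this particular drop-in does not override them.
	if _, err := toml.DecodeFile("/etc/crio/crio.conf.d/02-crio.conf", &c); err != nil {
		log.Fatal(err)
	}
	fmt.Println("signature_policy:", c.Crio.Image.SignaturePolicy)
	fmt.Println("pause_image:", c.Crio.Image.PauseImage)
}
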
	I1213 18:35:57.259749   38829 cni.go:84] Creating CNI manager for ""
	I1213 18:35:57.259765   38829 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 18:35:57.259800   38829 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 18:35:57.259831   38829 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-752103 NodeName:functional-752103 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 18:35:57.259972   38829 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-752103"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
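
The rendered kubeadm config is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A quick sanity check is to decode each document in turn; a sketch using gopkg.in/yaml.v3 (an assumption on my part, any streaming YAML decoder works) that prints each document's apiVersion and kind:

package main

import (
	"errors"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path from the scp step below
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break // no more documents in the stream
			}
			log.Fatal(err)
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}
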
	
	I1213 18:35:57.260053   38829 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 18:35:57.267743   38829 command_runner.go:130] > kubeadm
	I1213 18:35:57.267764   38829 command_runner.go:130] > kubectl
	I1213 18:35:57.267769   38829 command_runner.go:130] > kubelet
	I1213 18:35:57.268114   38829 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 18:35:57.268211   38829 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 18:35:57.275739   38829 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1213 18:35:57.288967   38829 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 18:35:57.301790   38829 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1213 18:35:57.314673   38829 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 18:35:57.318486   38829 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1213 18:35:57.318580   38829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 18:35:57.437137   38829 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 18:35:57.456752   38829 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103 for IP: 192.168.49.2
	I1213 18:35:57.456776   38829 certs.go:195] generating shared ca certs ...
	I1213 18:35:57.456809   38829 certs.go:227] acquiring lock for ca certs: {Name:mkf9b87b9a1a82bdfcae8a54365751154f6d858f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 18:35:57.456950   38829 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key
	I1213 18:35:57.457003   38829 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key
	I1213 18:35:57.457091   38829 certs.go:257] generating profile certs ...
	I1213 18:35:57.457200   38829 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/client.key
	I1213 18:35:57.457253   38829 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/apiserver.key.597c6026
	I1213 18:35:57.457304   38829 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/proxy-client.key
	I1213 18:35:57.457312   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1213 18:35:57.457324   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1213 18:35:57.457340   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1213 18:35:57.457356   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1213 18:35:57.457367   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1213 18:35:57.457383   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1213 18:35:57.457395   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1213 18:35:57.457405   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1213 18:35:57.457457   38829 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637.pem (1338 bytes)
	W1213 18:35:57.457490   38829 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637_empty.pem, impossibly tiny 0 bytes
	I1213 18:35:57.457499   38829 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 18:35:57.457529   38829 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem (1082 bytes)
	I1213 18:35:57.457562   38829 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem (1123 bytes)
	I1213 18:35:57.457593   38829 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem (1675 bytes)
	I1213 18:35:57.457644   38829 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem (1708 bytes)
	I1213 18:35:57.457676   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637.pem -> /usr/share/ca-certificates/4637.pem
	I1213 18:35:57.457691   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem -> /usr/share/ca-certificates/46372.pem
	I1213 18:35:57.457705   38829 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1213 18:35:57.458319   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 18:35:57.479443   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 18:35:57.498974   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 18:35:57.520210   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 18:35:57.540966   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 18:35:57.558774   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 18:35:57.576442   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 18:35:57.593767   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 18:35:57.611061   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637.pem --> /usr/share/ca-certificates/4637.pem (1338 bytes)
	I1213 18:35:57.628952   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem --> /usr/share/ca-certificates/46372.pem (1708 bytes)
	I1213 18:35:57.646627   38829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 18:35:57.664290   38829 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 18:35:57.677693   38829 ssh_runner.go:195] Run: openssl version
	I1213 18:35:57.683465   38829 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1213 18:35:57.683918   38829 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4637.pem
	I1213 18:35:57.691710   38829 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4637.pem /etc/ssl/certs/4637.pem
	I1213 18:35:57.699237   38829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4637.pem
	I1213 18:35:57.702943   38829 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 13 18:27 /usr/share/ca-certificates/4637.pem
	I1213 18:35:57.702972   38829 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 18:27 /usr/share/ca-certificates/4637.pem
	I1213 18:35:57.703038   38829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4637.pem
	I1213 18:35:57.743436   38829 command_runner.go:130] > 51391683
	I1213 18:35:57.743914   38829 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 18:35:57.751320   38829 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/46372.pem
	I1213 18:35:57.758498   38829 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/46372.pem /etc/ssl/certs/46372.pem
	I1213 18:35:57.765907   38829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/46372.pem
	I1213 18:35:57.769321   38829 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 13 18:27 /usr/share/ca-certificates/46372.pem
	I1213 18:35:57.769343   38829 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 18:27 /usr/share/ca-certificates/46372.pem
	I1213 18:35:57.769391   38829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/46372.pem
	I1213 18:35:57.809666   38829 command_runner.go:130] > 3ec20f2e
	I1213 18:35:57.810146   38829 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 18:35:57.818335   38829 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 18:35:57.826660   38829 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 18:35:57.834746   38829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 18:35:57.838666   38829 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 13 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I1213 18:35:57.838764   38829 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I1213 18:35:57.838851   38829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 18:35:57.879619   38829 command_runner.go:130] > b5213941
	I1213 18:35:57.880088   38829 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 18:35:57.887654   38829 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 18:35:57.891412   38829 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 18:35:57.891437   38829 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1213 18:35:57.891445   38829 command_runner.go:130] > Device: 259,1	Inode: 1056084     Links: 1
	I1213 18:35:57.891452   38829 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 18:35:57.891459   38829 command_runner.go:130] > Access: 2025-12-13 18:31:50.964784337 +0000
	I1213 18:35:57.891465   38829 command_runner.go:130] > Modify: 2025-12-13 18:27:46.490235937 +0000
	I1213 18:35:57.891470   38829 command_runner.go:130] > Change: 2025-12-13 18:27:46.490235937 +0000
	I1213 18:35:57.891475   38829 command_runner.go:130] >  Birth: 2025-12-13 18:27:46.490235937 +0000
	I1213 18:35:57.891539   38829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 18:35:57.937033   38829 command_runner.go:130] > Certificate will not expire
	I1213 18:35:57.937482   38829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 18:35:57.978137   38829 command_runner.go:130] > Certificate will not expire
	I1213 18:35:57.978564   38829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 18:35:58.033951   38829 command_runner.go:130] > Certificate will not expire
	I1213 18:35:58.034441   38829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 18:35:58.075936   38829 command_runner.go:130] > Certificate will not expire
	I1213 18:35:58.076412   38829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 18:35:58.118212   38829 command_runner.go:130] > Certificate will not expire
	I1213 18:35:58.118338   38829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 18:35:58.159347   38829 command_runner.go:130] > Certificate will not expire
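
Each of the openssl x509 -noout -checkend 86400 runs above asks whether a certificate will still be valid 24 hours from now ("Certificate will not expire" means it will). The same check can be made with only the Go standard library; a sketch using one of the certificate paths from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Equivalent of -checkend 86400: does the cert outlive the next 24 hours?
	if time.Now().Add(24 * time.Hour).Before(cert.NotAfter) {
		fmt.Println("Certificate will not expire")
	} else {
		fmt.Println("Certificate will expire")
	}
}
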
	I1213 18:35:58.159444   38829 kubeadm.go:401] StartCluster: {Name:functional-752103 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-752103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFi
rmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 18:35:58.159559   38829 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 18:35:58.159642   38829 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 18:35:58.186428   38829 cri.go:89] found id: ""
	I1213 18:35:58.186502   38829 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 18:35:58.193645   38829 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1213 18:35:58.193670   38829 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1213 18:35:58.193678   38829 command_runner.go:130] > /var/lib/minikube/etcd:
	I1213 18:35:58.194604   38829 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 18:35:58.194674   38829 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 18:35:58.194749   38829 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 18:35:58.202237   38829 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 18:35:58.202735   38829 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-752103" does not appear in /home/jenkins/minikube-integration/22122-2686/kubeconfig
	I1213 18:35:58.202850   38829 kubeconfig.go:62] /home/jenkins/minikube-integration/22122-2686/kubeconfig needs updating (will repair): [kubeconfig missing "functional-752103" cluster setting kubeconfig missing "functional-752103" context setting]
	I1213 18:35:58.203123   38829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/kubeconfig: {Name:mkd364151ba0e08b56cf2ae826abbcc274faaabf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 18:35:58.203546   38829 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22122-2686/kubeconfig
	I1213 18:35:58.203705   38829 kapi.go:59] client config for functional-752103: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/client.crt", KeyFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/client.key", CAFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextP
rotos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 18:35:58.204223   38829 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1213 18:35:58.204247   38829 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1213 18:35:58.204258   38829 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1213 18:35:58.204263   38829 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1213 18:35:58.204267   38829 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1213 18:35:58.204300   38829 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1213 18:35:58.204536   38829 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 18:35:58.212005   38829 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1213 18:35:58.212037   38829 kubeadm.go:602] duration metric: took 17.346627ms to restartPrimaryControlPlane
	I1213 18:35:58.212045   38829 kubeadm.go:403] duration metric: took 52.608163ms to StartCluster
	I1213 18:35:58.212060   38829 settings.go:142] acquiring lock: {Name:mkabef07beee93a0619ef6b8f854900ab9ed0899 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 18:35:58.212116   38829 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22122-2686/kubeconfig
	I1213 18:35:58.212712   38829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/kubeconfig: {Name:mkd364151ba0e08b56cf2ae826abbcc274faaabf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 18:35:58.212903   38829 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 18:35:58.213488   38829 config.go:182] Loaded profile config "functional-752103": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 18:35:58.213543   38829 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 18:35:58.213607   38829 addons.go:70] Setting storage-provisioner=true in profile "functional-752103"
	I1213 18:35:58.213620   38829 addons.go:239] Setting addon storage-provisioner=true in "functional-752103"
	I1213 18:35:58.213643   38829 host.go:66] Checking if "functional-752103" exists ...
	I1213 18:35:58.214229   38829 cli_runner.go:164] Run: docker container inspect functional-752103 --format={{.State.Status}}
	I1213 18:35:58.214390   38829 addons.go:70] Setting default-storageclass=true in profile "functional-752103"
	I1213 18:35:58.214412   38829 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-752103"
	I1213 18:35:58.214713   38829 cli_runner.go:164] Run: docker container inspect functional-752103 --format={{.State.Status}}
	I1213 18:35:58.219256   38829 out.go:179] * Verifying Kubernetes components...
	I1213 18:35:58.222143   38829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 18:35:58.244199   38829 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 18:35:58.247016   38829 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:35:58.247042   38829 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 18:35:58.247112   38829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:35:58.257520   38829 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22122-2686/kubeconfig
	I1213 18:35:58.257687   38829 kapi.go:59] client config for functional-752103: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/client.crt", KeyFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/client.key", CAFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextP
rotos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 18:35:58.257971   38829 addons.go:239] Setting addon default-storageclass=true in "functional-752103"
	I1213 18:35:58.258004   38829 host.go:66] Checking if "functional-752103" exists ...
	I1213 18:35:58.258425   38829 cli_runner.go:164] Run: docker container inspect functional-752103 --format={{.State.Status}}
	I1213 18:35:58.277237   38829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa Username:docker}
	I1213 18:35:58.306835   38829 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 18:35:58.306855   38829 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 18:35:58.306918   38829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:35:58.340724   38829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa Username:docker}
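
Before the addon manifests are copied onto the node, the two docker container inspect runs above resolve which host port is published for the container's 22/tcp so the SSH clients in the sshutil lines can be opened. A standard-library sketch of the same lookup, shelling out to docker with the exact Go template the log uses:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Same template string as in the cli_runner lines above: first host port
	// bound to the container's 22/tcp.
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, "functional-752103").Output()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
}
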
	I1213 18:35:58.416694   38829 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 18:35:58.451165   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:35:58.493354   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:35:59.080268   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:35:59.080307   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:35:59.080337   38829 retry.go:31] will retry after 153.209012ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:35:59.080385   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:35:59.080398   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:35:59.080404   38829 retry.go:31] will retry after 291.62792ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:35:59.080464   38829 node_ready.go:35] waiting up to 6m0s for node "functional-752103" to be "Ready" ...
	I1213 18:35:59.080578   38829 type.go:168] "Request Body" body=""
	I1213 18:35:59.080656   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:35:59.080963   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
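
node_ready.go polls the apiserver directly with GET /api/v1/nodes/functional-752103 until the node reports Ready; the empty status="" responses here are connection failures while the control plane is still restarting. An equivalent one-shot check with client-go (a sketch; the kubeconfig path is the one repaired earlier in this log):

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := "/home/jenkins/minikube-integration/22122-2686/kubeconfig" // path from the log
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "functional-752103", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err) // e.g. connection refused while the apiserver is still coming up
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Println("Ready condition:", c.Status)
		}
	}
}
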
	I1213 18:35:59.234362   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:35:59.300149   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:35:59.300200   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:35:59.300219   38829 retry.go:31] will retry after 511.331502ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:35:59.372301   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:35:59.426538   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:35:59.430102   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:35:59.430132   38829 retry.go:31] will retry after 426.700032ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:35:59.581486   38829 type.go:168] "Request Body" body=""
	I1213 18:35:59.581586   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:35:59.581963   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:35:59.812414   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:35:59.857973   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:35:59.893611   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:35:59.893688   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:35:59.893723   38829 retry.go:31] will retry after 310.068383ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:35:59.947559   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:35:59.947617   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:35:59.947640   38829 retry.go:31] will retry after 829.65637ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:00.080795   38829 type.go:168] "Request Body" body=""
	I1213 18:36:00.080875   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:00.081240   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:00.205923   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:36:00.416702   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:00.416818   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:00.416873   38829 retry.go:31] will retry after 579.133816ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:00.581369   38829 type.go:168] "Request Body" body=""
	I1213 18:36:00.581557   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:00.582010   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:00.778452   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:36:00.837536   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:00.837585   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:00.837604   38829 retry.go:31] will retry after 974.075863ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:00.996954   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:36:01.059672   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:01.059714   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:01.059763   38829 retry.go:31] will retry after 1.136000803s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:01.080856   38829 type.go:168] "Request Body" body=""
	I1213 18:36:01.080924   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:01.081261   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:01.081306   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:01.580749   38829 type.go:168] "Request Body" body=""
	I1213 18:36:01.580822   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:01.581172   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:01.812632   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:36:01.883701   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:01.883803   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:01.883825   38829 retry.go:31] will retry after 921.808005ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:02.081109   38829 type.go:168] "Request Body" body=""
	I1213 18:36:02.081198   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:02.081477   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:02.196877   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:36:02.253907   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:02.257605   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:02.257637   38829 retry.go:31] will retry after 1.546462752s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:02.581141   38829 type.go:168] "Request Body" body=""
	I1213 18:36:02.581286   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:02.581677   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:02.805901   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:36:02.889297   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:02.893182   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:02.893216   38829 retry.go:31] will retry after 1.247577285s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:03.081687   38829 type.go:168] "Request Body" body=""
	I1213 18:36:03.081764   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:03.082108   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:03.082162   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:03.580643   38829 type.go:168] "Request Body" body=""
	I1213 18:36:03.580714   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:03.580995   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:03.804445   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:36:03.865304   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:03.865353   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:03.865372   38829 retry.go:31] will retry after 3.450909707s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:04.080758   38829 type.go:168] "Request Body" body=""
	I1213 18:36:04.080837   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:04.081202   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:04.141517   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:36:04.204625   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:04.204670   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:04.204689   38829 retry.go:31] will retry after 3.409599879s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:04.581166   38829 type.go:168] "Request Body" body=""
	I1213 18:36:04.581250   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:04.581566   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:05.081373   38829 type.go:168] "Request Body" body=""
	I1213 18:36:05.081443   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:05.081739   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:05.581581   38829 type.go:168] "Request Body" body=""
	I1213 18:36:05.581657   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:05.581992   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:05.582049   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:06.080707   38829 type.go:168] "Request Body" body=""
	I1213 18:36:06.080786   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:06.081099   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:06.580765   38829 type.go:168] "Request Body" body=""
	I1213 18:36:06.580849   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:06.581220   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:07.080726   38829 type.go:168] "Request Body" body=""
	I1213 18:36:07.080806   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:07.081195   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:07.316533   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:36:07.393411   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:07.397246   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:07.397278   38829 retry.go:31] will retry after 2.442447522s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:07.581582   38829 type.go:168] "Request Body" body=""
	I1213 18:36:07.581660   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:07.582007   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:07.615412   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:36:07.670357   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:07.674453   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:07.674491   38829 retry.go:31] will retry after 4.254133001s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:08.080696   38829 type.go:168] "Request Body" body=""
	I1213 18:36:08.080805   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:08.081173   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:08.081221   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:08.581149   38829 type.go:168] "Request Body" body=""
	I1213 18:36:08.581249   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:08.581593   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:09.081583   38829 type.go:168] "Request Body" body=""
	I1213 18:36:09.081656   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:09.081980   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:09.581654   38829 type.go:168] "Request Body" body=""
	I1213 18:36:09.581729   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:09.582054   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:09.840484   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:36:09.900307   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:09.900343   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:09.900361   38829 retry.go:31] will retry after 4.640117862s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:10.081715   38829 type.go:168] "Request Body" body=""
	I1213 18:36:10.081794   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:10.082116   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:10.082183   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:10.580872   38829 type.go:168] "Request Body" body=""
	I1213 18:36:10.580959   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:10.581373   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:11.080692   38829 type.go:168] "Request Body" body=""
	I1213 18:36:11.080776   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:11.081115   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:11.580824   38829 type.go:168] "Request Body" body=""
	I1213 18:36:11.580896   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:11.581249   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:11.928812   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:36:11.987432   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:11.987481   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:11.987500   38829 retry.go:31] will retry after 8.251825899s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:12.081733   38829 type.go:168] "Request Body" body=""
	I1213 18:36:12.081819   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:12.082391   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:12.082470   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:12.580663   38829 type.go:168] "Request Body" body=""
	I1213 18:36:12.580742   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:12.581100   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:13.080737   38829 type.go:168] "Request Body" body=""
	I1213 18:36:13.080809   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:13.081119   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:13.580828   38829 type.go:168] "Request Body" body=""
	I1213 18:36:13.580900   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:13.581257   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:14.080983   38829 type.go:168] "Request Body" body=""
	I1213 18:36:14.081075   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:14.081364   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:14.540746   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:36:14.581321   38829 type.go:168] "Request Body" body=""
	I1213 18:36:14.581395   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:14.581672   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:14.581722   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:14.600534   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:14.600587   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:14.600605   38829 retry.go:31] will retry after 8.957681085s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:15.080748   38829 type.go:168] "Request Body" body=""
	I1213 18:36:15.080845   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:15.081200   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:15.580789   38829 type.go:168] "Request Body" body=""
	I1213 18:36:15.580868   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:15.581235   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:16.080743   38829 type.go:168] "Request Body" body=""
	I1213 18:36:16.080819   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:16.081170   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:16.580886   38829 type.go:168] "Request Body" body=""
	I1213 18:36:16.580958   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:16.581330   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:17.081614   38829 type.go:168] "Request Body" body=""
	I1213 18:36:17.081684   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:17.081955   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:17.081995   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:17.580662   38829 type.go:168] "Request Body" body=""
	I1213 18:36:17.580732   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:17.581063   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:18.080650   38829 type.go:168] "Request Body" body=""
	I1213 18:36:18.080721   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:18.081108   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:18.580672   38829 type.go:168] "Request Body" body=""
	I1213 18:36:18.580742   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:18.581079   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:19.081047   38829 type.go:168] "Request Body" body=""
	I1213 18:36:19.081115   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:19.081424   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:19.580732   38829 type.go:168] "Request Body" body=""
	I1213 18:36:19.580810   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:19.581191   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:19.581284   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:20.080706   38829 type.go:168] "Request Body" body=""
	I1213 18:36:20.080786   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:20.081124   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:20.239601   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:36:20.301361   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:20.301401   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:20.301420   38829 retry.go:31] will retry after 6.59814029s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:20.580747   38829 type.go:168] "Request Body" body=""
	I1213 18:36:20.580821   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:20.581125   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:21.080844   38829 type.go:168] "Request Body" body=""
	I1213 18:36:21.080933   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:21.081353   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:21.580686   38829 type.go:168] "Request Body" body=""
	I1213 18:36:21.580762   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:21.581080   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:22.080810   38829 type.go:168] "Request Body" body=""
	I1213 18:36:22.080884   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:22.081217   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:22.081274   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:22.580705   38829 type.go:168] "Request Body" body=""
	I1213 18:36:22.580799   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:22.581136   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:23.080675   38829 type.go:168] "Request Body" body=""
	I1213 18:36:23.080747   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:23.081137   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:23.558605   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:36:23.581258   38829 type.go:168] "Request Body" body=""
	I1213 18:36:23.581331   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:23.581605   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:23.617607   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:23.617653   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:23.617671   38829 retry.go:31] will retry after 14.669686806s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:24.081419   38829 type.go:168] "Request Body" body=""
	I1213 18:36:24.081508   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:24.081878   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:24.081930   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:24.580661   38829 type.go:168] "Request Body" body=""
	I1213 18:36:24.580735   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:24.581024   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:25.080794   38829 type.go:168] "Request Body" body=""
	I1213 18:36:25.080880   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:25.081347   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:25.580742   38829 type.go:168] "Request Body" body=""
	I1213 18:36:25.580816   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:25.581207   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:26.080781   38829 type.go:168] "Request Body" body=""
	I1213 18:36:26.080854   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:26.081166   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:26.580764   38829 type.go:168] "Request Body" body=""
	I1213 18:36:26.580862   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:26.581247   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:26.581300   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:26.900727   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:36:26.960607   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:26.960668   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:26.960687   38829 retry.go:31] will retry after 15.397640826s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:27.080883   38829 type.go:168] "Request Body" body=""
	I1213 18:36:27.080957   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:27.081297   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:27.580637   38829 type.go:168] "Request Body" body=""
	I1213 18:36:27.580703   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:27.580956   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:28.080641   38829 type.go:168] "Request Body" body=""
	I1213 18:36:28.080752   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:28.081081   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:28.580963   38829 type.go:168] "Request Body" body=""
	I1213 18:36:28.581049   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:28.581366   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:28.581418   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:29.081265   38829 type.go:168] "Request Body" body=""
	I1213 18:36:29.081330   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:29.081585   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:29.581341   38829 type.go:168] "Request Body" body=""
	I1213 18:36:29.581414   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:29.581724   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:30.083283   38829 type.go:168] "Request Body" body=""
	I1213 18:36:30.083370   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:30.083708   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:30.581559   38829 type.go:168] "Request Body" body=""
	I1213 18:36:30.581633   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:30.581902   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:30.581946   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:31.081665   38829 type.go:168] "Request Body" body=""
	I1213 18:36:31.081736   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:31.082102   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:31.580734   38829 type.go:168] "Request Body" body=""
	I1213 18:36:31.580815   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:31.581165   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:32.080588   38829 type.go:168] "Request Body" body=""
	I1213 18:36:32.080654   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:32.080909   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:32.581657   38829 type.go:168] "Request Body" body=""
	I1213 18:36:32.581734   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:32.582056   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:32.582116   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:33.080787   38829 type.go:168] "Request Body" body=""
	I1213 18:36:33.080867   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:33.081206   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:33.580678   38829 type.go:168] "Request Body" body=""
	I1213 18:36:33.580745   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:33.580998   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:34.080961   38829 type.go:168] "Request Body" body=""
	I1213 18:36:34.081065   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:34.081433   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:34.581228   38829 type.go:168] "Request Body" body=""
	I1213 18:36:34.581300   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:34.581636   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:35.081408   38829 type.go:168] "Request Body" body=""
	I1213 18:36:35.081478   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:35.081747   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:35.081790   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:35.581492   38829 type.go:168] "Request Body" body=""
	I1213 18:36:35.581568   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:35.581859   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:36.081553   38829 type.go:168] "Request Body" body=""
	I1213 18:36:36.081623   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:36.081928   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:36.581632   38829 type.go:168] "Request Body" body=""
	I1213 18:36:36.581711   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:36.582018   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:37.080726   38829 type.go:168] "Request Body" body=""
	I1213 18:36:37.080804   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:37.081189   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:37.580917   38829 type.go:168] "Request Body" body=""
	I1213 18:36:37.580993   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:37.581352   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:37.581446   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:38.080688   38829 type.go:168] "Request Body" body=""
	I1213 18:36:38.080770   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:38.081101   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:38.287495   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:36:38.357240   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:38.360822   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:38.360853   38829 retry.go:31] will retry after 30.28485436s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:38.581302   38829 type.go:168] "Request Body" body=""
	I1213 18:36:38.581374   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:38.581695   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:39.081218   38829 type.go:168] "Request Body" body=""
	I1213 18:36:39.081295   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:39.081664   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:39.581465   38829 type.go:168] "Request Body" body=""
	I1213 18:36:39.581533   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:39.581794   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:39.581852   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:40.081640   38829 type.go:168] "Request Body" body=""
	I1213 18:36:40.081724   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:40.082071   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:40.580714   38829 type.go:168] "Request Body" body=""
	I1213 18:36:40.580788   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:40.581147   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:41.080724   38829 type.go:168] "Request Body" body=""
	I1213 18:36:41.080801   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:41.081086   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:41.580719   38829 type.go:168] "Request Body" body=""
	I1213 18:36:41.580809   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:41.581140   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:42.080831   38829 type.go:168] "Request Body" body=""
	I1213 18:36:42.080909   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:42.081302   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:42.081363   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:42.358603   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:36:42.430743   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:42.430803   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:42.430822   38829 retry.go:31] will retry after 12.093455046s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:42.581106   38829 type.go:168] "Request Body" body=""
	I1213 18:36:42.581178   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:42.581444   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:43.081272   38829 type.go:168] "Request Body" body=""
	I1213 18:36:43.081354   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:43.081648   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:43.580658   38829 type.go:168] "Request Body" body=""
	I1213 18:36:43.580735   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:43.581055   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:44.080685   38829 type.go:168] "Request Body" body=""
	I1213 18:36:44.080795   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:44.081152   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:44.580685   38829 type.go:168] "Request Body" body=""
	I1213 18:36:44.580759   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:44.581102   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:44.581161   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:45.080810   38829 type.go:168] "Request Body" body=""
	I1213 18:36:45.080894   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:45.081226   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:45.581071   38829 type.go:168] "Request Body" body=""
	I1213 18:36:45.581137   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:45.581415   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:46.081136   38829 type.go:168] "Request Body" body=""
	I1213 18:36:46.081217   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:46.081567   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:46.581397   38829 type.go:168] "Request Body" body=""
	I1213 18:36:46.581468   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:46.581797   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:46.581852   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:47.081586   38829 type.go:168] "Request Body" body=""
	I1213 18:36:47.081660   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:47.081917   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:47.580671   38829 type.go:168] "Request Body" body=""
	I1213 18:36:47.580752   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:47.581109   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:48.080824   38829 type.go:168] "Request Body" body=""
	I1213 18:36:48.080903   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:48.081209   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:48.581175   38829 type.go:168] "Request Body" body=""
	I1213 18:36:48.581241   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:48.581504   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:49.081596   38829 type.go:168] "Request Body" body=""
	I1213 18:36:49.081669   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:49.082029   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:49.082084   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:49.580622   38829 type.go:168] "Request Body" body=""
	I1213 18:36:49.580704   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:49.581055   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:50.080743   38829 type.go:168] "Request Body" body=""
	I1213 18:36:50.080823   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:50.081147   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:50.580734   38829 type.go:168] "Request Body" body=""
	I1213 18:36:50.580806   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:50.581174   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:51.080882   38829 type.go:168] "Request Body" body=""
	I1213 18:36:51.080963   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:51.081341   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:51.580687   38829 type.go:168] "Request Body" body=""
	I1213 18:36:51.580761   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:51.581057   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:51.581110   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:52.080731   38829 type.go:168] "Request Body" body=""
	I1213 18:36:52.080817   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:52.081192   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:52.580893   38829 type.go:168] "Request Body" body=""
	I1213 18:36:52.580986   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:52.581347   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:53.080709   38829 type.go:168] "Request Body" body=""
	I1213 18:36:53.080779   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:53.081063   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:53.580755   38829 type.go:168] "Request Body" body=""
	I1213 18:36:53.580829   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:53.581182   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:53.581240   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:54.081104   38829 type.go:168] "Request Body" body=""
	I1213 18:36:54.081173   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:54.081470   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:54.525326   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:36:54.580832   38829 type.go:168] "Request Body" body=""
	I1213 18:36:54.580898   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:54.581173   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:54.600652   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:36:54.600694   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:54.600713   38829 retry.go:31] will retry after 41.212755678s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:36:55.081498   38829 type.go:168] "Request Body" body=""
	I1213 18:36:55.081571   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:55.081915   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:55.580632   38829 type.go:168] "Request Body" body=""
	I1213 18:36:55.580703   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:55.581066   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:56.080716   38829 type.go:168] "Request Body" body=""
	I1213 18:36:56.080780   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:56.081078   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:56.081124   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:56.580765   38829 type.go:168] "Request Body" body=""
	I1213 18:36:56.580847   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:56.581215   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:57.080817   38829 type.go:168] "Request Body" body=""
	I1213 18:36:57.080904   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:57.081246   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:57.580702   38829 type.go:168] "Request Body" body=""
	I1213 18:36:57.580781   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:57.581095   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:58.080724   38829 type.go:168] "Request Body" body=""
	I1213 18:36:58.080815   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:58.081171   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:36:58.081230   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:36:58.580804   38829 type.go:168] "Request Body" body=""
	I1213 18:36:58.580886   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:58.581230   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:59.080817   38829 type.go:168] "Request Body" body=""
	I1213 18:36:59.080891   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:59.081167   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:36:59.580749   38829 type.go:168] "Request Body" body=""
	I1213 18:36:59.580848   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:36:59.581262   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:00.080983   38829 type.go:168] "Request Body" body=""
	I1213 18:37:00.081091   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:00.081411   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:00.081460   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:00.580690   38829 type.go:168] "Request Body" body=""
	I1213 18:37:00.580766   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:00.581072   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:01.080673   38829 type.go:168] "Request Body" body=""
	I1213 18:37:01.080760   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:01.081112   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:01.580720   38829 type.go:168] "Request Body" body=""
	I1213 18:37:01.580794   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:01.581158   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:02.080753   38829 type.go:168] "Request Body" body=""
	I1213 18:37:02.080821   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:02.081110   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:02.580732   38829 type.go:168] "Request Body" body=""
	I1213 18:37:02.580806   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:02.581155   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:02.581205   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:03.080748   38829 type.go:168] "Request Body" body=""
	I1213 18:37:03.080823   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:03.081153   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:03.580615   38829 type.go:168] "Request Body" body=""
	I1213 18:37:03.580691   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:03.580974   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:04.080845   38829 type.go:168] "Request Body" body=""
	I1213 18:37:04.080916   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:04.081330   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:04.580902   38829 type.go:168] "Request Body" body=""
	I1213 18:37:04.581002   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:04.581380   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:04.581437   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:05.080788   38829 type.go:168] "Request Body" body=""
	I1213 18:37:05.080867   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:05.081182   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:05.580731   38829 type.go:168] "Request Body" body=""
	I1213 18:37:05.580826   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:05.581178   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:06.080721   38829 type.go:168] "Request Body" body=""
	I1213 18:37:06.080796   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:06.081180   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:06.580658   38829 type.go:168] "Request Body" body=""
	I1213 18:37:06.580727   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:06.581063   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:07.080796   38829 type.go:168] "Request Body" body=""
	I1213 18:37:07.080883   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:07.081219   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:07.081280   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:07.580756   38829 type.go:168] "Request Body" body=""
	I1213 18:37:07.580835   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:07.581166   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:08.080678   38829 type.go:168] "Request Body" body=""
	I1213 18:37:08.080757   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:08.081073   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:08.580840   38829 type.go:168] "Request Body" body=""
	I1213 18:37:08.580922   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:08.581286   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:08.646539   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:37:08.707161   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:37:08.707197   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:37:08.707216   38829 retry.go:31] will retry after 43.904706278s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 18:37:09.080730   38829 type.go:168] "Request Body" body=""
	I1213 18:37:09.080812   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:09.081148   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:09.580688   38829 type.go:168] "Request Body" body=""
	I1213 18:37:09.580756   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:09.581080   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:09.581129   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:10.080738   38829 type.go:168] "Request Body" body=""
	I1213 18:37:10.080818   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:10.081184   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:10.580752   38829 type.go:168] "Request Body" body=""
	I1213 18:37:10.580829   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:10.581212   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:11.080819   38829 type.go:168] "Request Body" body=""
	I1213 18:37:11.080905   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:11.081275   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:11.580750   38829 type.go:168] "Request Body" body=""
	I1213 18:37:11.580826   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:11.581167   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:11.581218   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:12.080976   38829 type.go:168] "Request Body" body=""
	I1213 18:37:12.081075   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:12.081413   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:12.581163   38829 type.go:168] "Request Body" body=""
	I1213 18:37:12.581239   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:12.581504   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:13.081350   38829 type.go:168] "Request Body" body=""
	I1213 18:37:13.081422   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:13.081759   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:13.581540   38829 type.go:168] "Request Body" body=""
	I1213 18:37:13.581621   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:13.581958   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:13.582012   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:14.080637   38829 type.go:168] "Request Body" body=""
	I1213 18:37:14.080749   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:14.081037   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:14.580751   38829 type.go:168] "Request Body" body=""
	I1213 18:37:14.580822   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:14.581126   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:15.080809   38829 type.go:168] "Request Body" body=""
	I1213 18:37:15.080894   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:15.081289   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:15.580701   38829 type.go:168] "Request Body" body=""
	I1213 18:37:15.580784   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:15.581161   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:16.080844   38829 type.go:168] "Request Body" body=""
	I1213 18:37:16.080922   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:16.081237   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:16.081285   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:16.580898   38829 type.go:168] "Request Body" body=""
	I1213 18:37:16.581034   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:16.581399   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:17.080661   38829 type.go:168] "Request Body" body=""
	I1213 18:37:17.080737   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:17.080990   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:17.580692   38829 type.go:168] "Request Body" body=""
	I1213 18:37:17.580803   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:17.581102   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:18.080750   38829 type.go:168] "Request Body" body=""
	I1213 18:37:18.080868   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:18.081221   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:18.581194   38829 type.go:168] "Request Body" body=""
	I1213 18:37:18.581282   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:18.581589   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:18.581661   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:19.080720   38829 type.go:168] "Request Body" body=""
	I1213 18:37:19.080794   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:19.081153   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:19.580707   38829 type.go:168] "Request Body" body=""
	I1213 18:37:19.580807   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:19.581139   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:20.080683   38829 type.go:168] "Request Body" body=""
	I1213 18:37:20.080783   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:20.081124   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:20.580699   38829 type.go:168] "Request Body" body=""
	I1213 18:37:20.580768   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:20.581140   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:21.080704   38829 type.go:168] "Request Body" body=""
	I1213 18:37:21.080813   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:21.081147   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:21.081200   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:21.580715   38829 type.go:168] "Request Body" body=""
	I1213 18:37:21.580794   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:21.581158   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:22.080770   38829 type.go:168] "Request Body" body=""
	I1213 18:37:22.080878   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:22.081249   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:22.580823   38829 type.go:168] "Request Body" body=""
	I1213 18:37:22.580919   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:22.581227   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:23.080672   38829 type.go:168] "Request Body" body=""
	I1213 18:37:23.080740   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:23.081069   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:23.580725   38829 type.go:168] "Request Body" body=""
	I1213 18:37:23.580816   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:23.581144   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:23.581194   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:24.081109   38829 type.go:168] "Request Body" body=""
	I1213 18:37:24.081180   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:24.081522   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:24.581618   38829 type.go:168] "Request Body" body=""
	I1213 18:37:24.581687   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:24.582010   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:25.080756   38829 type.go:168] "Request Body" body=""
	I1213 18:37:25.080839   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:25.081197   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:25.580943   38829 type.go:168] "Request Body" body=""
	I1213 18:37:25.581038   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:25.581354   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:25.581416   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:26.080723   38829 type.go:168] "Request Body" body=""
	I1213 18:37:26.080835   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:26.081227   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:26.580735   38829 type.go:168] "Request Body" body=""
	I1213 18:37:26.580817   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:26.581160   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:27.080700   38829 type.go:168] "Request Body" body=""
	I1213 18:37:27.080784   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:27.081126   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:27.580667   38829 type.go:168] "Request Body" body=""
	I1213 18:37:27.580751   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:27.581089   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:28.080604   38829 type.go:168] "Request Body" body=""
	I1213 18:37:28.080698   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:28.081045   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:28.081097   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:28.580817   38829 type.go:168] "Request Body" body=""
	I1213 18:37:28.580906   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:28.581222   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:29.080796   38829 type.go:168] "Request Body" body=""
	I1213 18:37:29.080873   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:29.081151   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:29.580777   38829 type.go:168] "Request Body" body=""
	I1213 18:37:29.580870   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:29.581199   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:30.080803   38829 type.go:168] "Request Body" body=""
	I1213 18:37:30.080884   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:30.081237   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:30.081287   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:30.580672   38829 type.go:168] "Request Body" body=""
	I1213 18:37:30.580745   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:30.581077   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:31.081506   38829 type.go:168] "Request Body" body=""
	I1213 18:37:31.081581   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:31.081922   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:31.580645   38829 type.go:168] "Request Body" body=""
	I1213 18:37:31.580718   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:31.581102   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:32.080661   38829 type.go:168] "Request Body" body=""
	I1213 18:37:32.080783   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:32.081114   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:32.580825   38829 type.go:168] "Request Body" body=""
	I1213 18:37:32.580936   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:32.581248   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:32.581295   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:33.080746   38829 type.go:168] "Request Body" body=""
	I1213 18:37:33.080835   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:33.081225   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:33.580676   38829 type.go:168] "Request Body" body=""
	I1213 18:37:33.580750   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:33.581029   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:34.081646   38829 type.go:168] "Request Body" body=""
	I1213 18:37:34.081715   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:34.082009   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:34.580682   38829 type.go:168] "Request Body" body=""
	I1213 18:37:34.580780   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:34.581134   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:35.080825   38829 type.go:168] "Request Body" body=""
	I1213 18:37:35.080895   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:35.081246   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:35.081298   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:35.580940   38829 type.go:168] "Request Body" body=""
	I1213 18:37:35.581051   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:35.581350   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:35.813701   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 18:37:35.887144   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:37:35.887179   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:37:35.887279   38829 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 18:37:36.080750   38829 type.go:168] "Request Body" body=""
	I1213 18:37:36.080833   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:36.081177   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:36.580678   38829 type.go:168] "Request Body" body=""
	I1213 18:37:36.580752   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:36.581058   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:37.080714   38829 type.go:168] "Request Body" body=""
	I1213 18:37:37.080814   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:37.081161   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:37.580851   38829 type.go:168] "Request Body" body=""
	I1213 18:37:37.580926   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:37.581239   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:37.581288   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:38.080774   38829 type.go:168] "Request Body" body=""
	I1213 18:37:38.080865   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:38.081305   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:38.581237   38829 type.go:168] "Request Body" body=""
	I1213 18:37:38.581321   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:38.581645   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:39.081533   38829 type.go:168] "Request Body" body=""
	I1213 18:37:39.081612   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:39.081897   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:39.581503   38829 type.go:168] "Request Body" body=""
	I1213 18:37:39.581567   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:39.581828   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:39.581866   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:40.081636   38829 type.go:168] "Request Body" body=""
	I1213 18:37:40.081710   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:40.082035   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:40.580686   38829 type.go:168] "Request Body" body=""
	I1213 18:37:40.580764   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:40.581082   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:41.080659   38829 type.go:168] "Request Body" body=""
	I1213 18:37:41.080744   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:41.081073   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:41.580856   38829 type.go:168] "Request Body" body=""
	I1213 18:37:41.580929   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:41.581268   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:42.080912   38829 type.go:168] "Request Body" body=""
	I1213 18:37:42.081054   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:42.081405   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:42.081473   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:42.581188   38829 type.go:168] "Request Body" body=""
	I1213 18:37:42.581268   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:42.581539   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:43.081397   38829 type.go:168] "Request Body" body=""
	I1213 18:37:43.081474   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:43.081823   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:43.581624   38829 type.go:168] "Request Body" body=""
	I1213 18:37:43.581704   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:43.582019   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:44.081168   38829 type.go:168] "Request Body" body=""
	I1213 18:37:44.081243   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:44.081539   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:44.081581   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:44.581405   38829 type.go:168] "Request Body" body=""
	I1213 18:37:44.581481   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:44.581805   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:45.081836   38829 type.go:168] "Request Body" body=""
	I1213 18:37:45.081938   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:45.082358   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:45.580699   38829 type.go:168] "Request Body" body=""
	I1213 18:37:45.580773   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:45.581090   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:46.080825   38829 type.go:168] "Request Body" body=""
	I1213 18:37:46.080898   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:46.081231   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:46.580728   38829 type.go:168] "Request Body" body=""
	I1213 18:37:46.580818   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:46.581180   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:46.581235   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:47.080684   38829 type.go:168] "Request Body" body=""
	I1213 18:37:47.080759   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:47.081124   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:47.580848   38829 type.go:168] "Request Body" body=""
	I1213 18:37:47.580921   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:47.581277   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:48.080712   38829 type.go:168] "Request Body" body=""
	I1213 18:37:48.080804   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:48.081135   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:48.580811   38829 type.go:168] "Request Body" body=""
	I1213 18:37:48.580882   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:48.581154   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:49.081058   38829 type.go:168] "Request Body" body=""
	I1213 18:37:49.081150   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:49.081477   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:49.081542   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:49.581293   38829 type.go:168] "Request Body" body=""
	I1213 18:37:49.581370   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:49.581713   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:50.081496   38829 type.go:168] "Request Body" body=""
	I1213 18:37:50.081562   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:50.081847   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:50.581629   38829 type.go:168] "Request Body" body=""
	I1213 18:37:50.581706   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:50.582071   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:51.080700   38829 type.go:168] "Request Body" body=""
	I1213 18:37:51.080790   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:51.081171   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:51.580683   38829 type.go:168] "Request Body" body=""
	I1213 18:37:51.580754   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:51.581047   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:51.581094   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:52.080714   38829 type.go:168] "Request Body" body=""
	I1213 18:37:52.080787   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:52.081175   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:52.580775   38829 type.go:168] "Request Body" body=""
	I1213 18:37:52.580867   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:52.581254   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:52.612466   38829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 18:37:52.672905   38829 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:37:52.677070   38829 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 18:37:52.677165   38829 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 18:37:52.680309   38829 out.go:179] * Enabled addons: 
	I1213 18:37:52.684021   38829 addons.go:530] duration metric: took 1m54.470472162s for enable addons: enabled=[]
	I1213 18:37:53.081534   38829 type.go:168] "Request Body" body=""
	I1213 18:37:53.081600   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:53.081904   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:53.580635   38829 type.go:168] "Request Body" body=""
	I1213 18:37:53.580711   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:53.581033   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:54.080643   38829 type.go:168] "Request Body" body=""
	I1213 18:37:54.080739   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:54.082029   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	W1213 18:37:54.082091   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:54.581623   38829 type.go:168] "Request Body" body=""
	I1213 18:37:54.581698   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:54.581957   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:55.080687   38829 type.go:168] "Request Body" body=""
	I1213 18:37:55.080780   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:55.081111   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:55.580756   38829 type.go:168] "Request Body" body=""
	I1213 18:37:55.580828   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:55.581197   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:56.080640   38829 type.go:168] "Request Body" body=""
	I1213 18:37:56.080714   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:56.081049   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:56.580613   38829 type.go:168] "Request Body" body=""
	I1213 18:37:56.580689   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:56.581045   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:56.581101   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:57.080597   38829 type.go:168] "Request Body" body=""
	I1213 18:37:57.080691   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:57.081049   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:57.580930   38829 type.go:168] "Request Body" body=""
	I1213 18:37:57.581038   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:57.585714   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1213 18:37:58.081512   38829 type.go:168] "Request Body" body=""
	I1213 18:37:58.081591   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:58.081945   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:58.580703   38829 type.go:168] "Request Body" body=""
	I1213 18:37:58.580778   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:58.581145   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:37:58.581214   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:37:59.081515   38829 type.go:168] "Request Body" body=""
	I1213 18:37:59.081606   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:59.081931   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:37:59.580661   38829 type.go:168] "Request Body" body=""
	I1213 18:37:59.580732   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:37:59.581072   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:00.080803   38829 type.go:168] "Request Body" body=""
	I1213 18:38:00.080888   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:00.081237   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:00.581619   38829 type.go:168] "Request Body" body=""
	I1213 18:38:00.581690   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:00.582027   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:00.582084   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:01.080751   38829 type.go:168] "Request Body" body=""
	I1213 18:38:01.080838   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:01.081194   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:01.580724   38829 type.go:168] "Request Body" body=""
	I1213 18:38:01.580804   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:01.581152   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:02.080668   38829 type.go:168] "Request Body" body=""
	I1213 18:38:02.080746   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:02.081102   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:02.580776   38829 type.go:168] "Request Body" body=""
	I1213 18:38:02.580850   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:02.581187   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:03.080936   38829 type.go:168] "Request Body" body=""
	I1213 18:38:03.081031   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:03.081349   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:03.081405   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:03.580669   38829 type.go:168] "Request Body" body=""
	I1213 18:38:03.580767   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:03.581056   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:04.080818   38829 type.go:168] "Request Body" body=""
	I1213 18:38:04.080899   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:04.081235   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:04.580930   38829 type.go:168] "Request Body" body=""
	I1213 18:38:04.581025   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:04.581369   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:05.080659   38829 type.go:168] "Request Body" body=""
	I1213 18:38:05.080743   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:05.081076   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:05.580757   38829 type.go:168] "Request Body" body=""
	I1213 18:38:05.580829   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:05.581176   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:05.581227   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:06.080773   38829 type.go:168] "Request Body" body=""
	I1213 18:38:06.080851   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:06.081201   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:06.580678   38829 type.go:168] "Request Body" body=""
	I1213 18:38:06.580751   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:06.581040   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:07.080776   38829 type.go:168] "Request Body" body=""
	I1213 18:38:07.080848   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:07.081170   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:07.580731   38829 type.go:168] "Request Body" body=""
	I1213 18:38:07.580806   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:07.581160   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:08.080772   38829 type.go:168] "Request Body" body=""
	I1213 18:38:08.080849   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:08.081161   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:08.081226   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:08.580947   38829 type.go:168] "Request Body" body=""
	I1213 18:38:08.581044   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:08.581405   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:09.081557   38829 type.go:168] "Request Body" body=""
	I1213 18:38:09.081630   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:09.081955   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:09.580701   38829 type.go:168] "Request Body" body=""
	I1213 18:38:09.580777   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:09.581100   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:10.080747   38829 type.go:168] "Request Body" body=""
	I1213 18:38:10.080835   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:10.081225   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:10.081288   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:10.580771   38829 type.go:168] "Request Body" body=""
	I1213 18:38:10.580886   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:10.581218   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:11.080922   38829 type.go:168] "Request Body" body=""
	I1213 18:38:11.080992   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:11.081274   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:11.581973   38829 type.go:168] "Request Body" body=""
	I1213 18:38:11.582052   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:11.582377   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:12.081104   38829 type.go:168] "Request Body" body=""
	I1213 18:38:12.081179   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:12.081532   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:12.081585   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:12.581355   38829 type.go:168] "Request Body" body=""
	I1213 18:38:12.581430   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:12.581762   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:13.081529   38829 type.go:168] "Request Body" body=""
	I1213 18:38:13.081604   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:13.081921   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:13.580639   38829 type.go:168] "Request Body" body=""
	I1213 18:38:13.580716   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:13.581089   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:14.081616   38829 type.go:168] "Request Body" body=""
	I1213 18:38:14.081703   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:14.082037   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:14.082090   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:14.580727   38829 type.go:168] "Request Body" body=""
	I1213 18:38:14.580815   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:14.581180   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:15.080903   38829 type.go:168] "Request Body" body=""
	I1213 18:38:15.080982   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:15.081338   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:15.581041   38829 type.go:168] "Request Body" body=""
	I1213 18:38:15.581119   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:15.581474   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:16.081265   38829 type.go:168] "Request Body" body=""
	I1213 18:38:16.081338   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:16.081665   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:16.581493   38829 type.go:168] "Request Body" body=""
	I1213 18:38:16.581589   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:16.581945   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:16.581999   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:17.080642   38829 type.go:168] "Request Body" body=""
	I1213 18:38:17.080713   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:17.080986   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:17.580719   38829 type.go:168] "Request Body" body=""
	I1213 18:38:17.580796   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:17.581138   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:18.080868   38829 type.go:168] "Request Body" body=""
	I1213 18:38:18.080948   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:18.081331   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:18.581194   38829 type.go:168] "Request Body" body=""
	I1213 18:38:18.581268   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:18.581529   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:19.081522   38829 type.go:168] "Request Body" body=""
	I1213 18:38:19.081598   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:19.081945   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:19.082001   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:19.580714   38829 type.go:168] "Request Body" body=""
	I1213 18:38:19.580805   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:19.581171   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:20.080873   38829 type.go:168] "Request Body" body=""
	I1213 18:38:20.080948   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:20.081259   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:20.580728   38829 type.go:168] "Request Body" body=""
	I1213 18:38:20.580811   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:20.581178   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:21.080749   38829 type.go:168] "Request Body" body=""
	I1213 18:38:21.080849   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:21.081219   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:21.580655   38829 type.go:168] "Request Body" body=""
	I1213 18:38:21.580730   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:21.581101   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:21.581180   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:22.080740   38829 type.go:168] "Request Body" body=""
	I1213 18:38:22.080819   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:22.081200   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:22.580922   38829 type.go:168] "Request Body" body=""
	I1213 18:38:22.581020   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:22.581389   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:23.080725   38829 type.go:168] "Request Body" body=""
	I1213 18:38:23.080802   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:23.081145   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:23.580880   38829 type.go:168] "Request Body" body=""
	I1213 18:38:23.580958   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:23.581338   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:23.581392   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:24.081664   38829 type.go:168] "Request Body" body=""
	I1213 18:38:24.081759   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:24.082117   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:24.580825   38829 type.go:168] "Request Body" body=""
	I1213 18:38:24.580901   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:24.581233   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:25.080731   38829 type.go:168] "Request Body" body=""
	I1213 18:38:25.080813   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:25.081203   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:25.580730   38829 type.go:168] "Request Body" body=""
	I1213 18:38:25.580807   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:25.581142   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:26.080689   38829 type.go:168] "Request Body" body=""
	I1213 18:38:26.080779   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:26.081103   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:26.081156   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:26.580750   38829 type.go:168] "Request Body" body=""
	I1213 18:38:26.580831   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:26.581177   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:27.080736   38829 type.go:168] "Request Body" body=""
	I1213 18:38:27.080812   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:27.081191   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:27.580696   38829 type.go:168] "Request Body" body=""
	I1213 18:38:27.580770   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:27.581094   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:28.080768   38829 type.go:168] "Request Body" body=""
	I1213 18:38:28.080841   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:28.081147   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:28.081197   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:28.581180   38829 type.go:168] "Request Body" body=""
	I1213 18:38:28.581274   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:28.581646   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:29.080821   38829 type.go:168] "Request Body" body=""
	I1213 18:38:29.080892   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:29.081191   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:29.580951   38829 type.go:168] "Request Body" body=""
	I1213 18:38:29.581053   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:29.581390   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:30.080799   38829 type.go:168] "Request Body" body=""
	I1213 18:38:30.080882   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:30.081350   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:30.081432   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:30.580706   38829 type.go:168] "Request Body" body=""
	I1213 18:38:30.580834   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:30.581124   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:31.080774   38829 type.go:168] "Request Body" body=""
	I1213 18:38:31.080864   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:31.081259   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:31.580984   38829 type.go:168] "Request Body" body=""
	I1213 18:38:31.581082   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:31.581450   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:32.080667   38829 type.go:168] "Request Body" body=""
	I1213 18:38:32.080743   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:32.081034   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:32.580743   38829 type.go:168] "Request Body" body=""
	I1213 18:38:32.580816   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:32.581200   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:32.581255   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:33.080726   38829 type.go:168] "Request Body" body=""
	I1213 18:38:33.080809   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:33.081182   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:33.580725   38829 type.go:168] "Request Body" body=""
	I1213 18:38:33.580795   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:33.581164   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:34.081257   38829 type.go:168] "Request Body" body=""
	I1213 18:38:34.081337   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:34.081668   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:34.581504   38829 type.go:168] "Request Body" body=""
	I1213 18:38:34.581582   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:34.581919   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:34.581974   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:35.080651   38829 type.go:168] "Request Body" body=""
	I1213 18:38:35.080731   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:35.081024   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:35.580713   38829 type.go:168] "Request Body" body=""
	I1213 18:38:35.580792   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:35.581140   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:36.080919   38829 type.go:168] "Request Body" body=""
	I1213 18:38:36.080998   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:36.081335   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:36.580681   38829 type.go:168] "Request Body" body=""
	I1213 18:38:36.580752   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:36.581033   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:37.080717   38829 type.go:168] "Request Body" body=""
	I1213 18:38:37.080818   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:37.081165   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:37.081218   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:37.580733   38829 type.go:168] "Request Body" body=""
	I1213 18:38:37.580809   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:37.581143   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:38.080691   38829 type.go:168] "Request Body" body=""
	I1213 18:38:38.080786   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:38.081186   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:38.581125   38829 type.go:168] "Request Body" body=""
	I1213 18:38:38.581202   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:38.581601   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:39.081372   38829 type.go:168] "Request Body" body=""
	I1213 18:38:39.081450   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:39.081746   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:39.081795   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:39.581476   38829 type.go:168] "Request Body" body=""
	I1213 18:38:39.581574   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:39.581834   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:40.080652   38829 type.go:168] "Request Body" body=""
	I1213 18:38:40.080736   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:40.081070   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:40.580762   38829 type.go:168] "Request Body" body=""
	I1213 18:38:40.580837   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:40.581170   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:41.080790   38829 type.go:168] "Request Body" body=""
	I1213 18:38:41.080859   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:41.081138   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:41.580736   38829 type.go:168] "Request Body" body=""
	I1213 18:38:41.580815   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:41.581161   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:41.581213   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:42.081232   38829 type.go:168] "Request Body" body=""
	I1213 18:38:42.081358   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:42.081865   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:42.580689   38829 type.go:168] "Request Body" body=""
	I1213 18:38:42.580771   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:42.581121   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:43.080823   38829 type.go:168] "Request Body" body=""
	I1213 18:38:43.080907   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:43.081225   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:43.580730   38829 type.go:168] "Request Body" body=""
	I1213 18:38:43.580836   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:43.581158   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:44.081575   38829 type.go:168] "Request Body" body=""
	I1213 18:38:44.081651   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:44.081974   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:44.082018   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:44.580749   38829 type.go:168] "Request Body" body=""
	I1213 18:38:44.580850   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:44.581196   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:45.080840   38829 type.go:168] "Request Body" body=""
	I1213 18:38:45.080920   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:45.081286   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:45.580954   38829 type.go:168] "Request Body" body=""
	I1213 18:38:45.581055   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:45.581346   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:46.081059   38829 type.go:168] "Request Body" body=""
	I1213 18:38:46.081132   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:46.081421   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:46.581118   38829 type.go:168] "Request Body" body=""
	I1213 18:38:46.581200   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:46.581535   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:46.581590   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:47.081106   38829 type.go:168] "Request Body" body=""
	I1213 18:38:47.081224   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:47.081480   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:47.581264   38829 type.go:168] "Request Body" body=""
	I1213 18:38:47.581336   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:47.581677   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:48.081348   38829 type.go:168] "Request Body" body=""
	I1213 18:38:48.081420   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:48.081786   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:48.580712   38829 type.go:168] "Request Body" body=""
	I1213 18:38:48.580809   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:48.581132   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:49.081267   38829 type.go:168] "Request Body" body=""
	I1213 18:38:49.081338   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:49.081661   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:49.081719   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:49.581307   38829 type.go:168] "Request Body" body=""
	I1213 18:38:49.581390   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:49.581723   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:50.081491   38829 type.go:168] "Request Body" body=""
	I1213 18:38:50.081558   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:50.081836   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:50.581617   38829 type.go:168] "Request Body" body=""
	I1213 18:38:50.581690   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:50.582006   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:51.080731   38829 type.go:168] "Request Body" body=""
	I1213 18:38:51.080809   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:51.081173   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:51.580635   38829 type.go:168] "Request Body" body=""
	I1213 18:38:51.580703   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:51.581040   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:51.581092   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:52.080731   38829 type.go:168] "Request Body" body=""
	I1213 18:38:52.080812   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:52.081170   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:52.580897   38829 type.go:168] "Request Body" body=""
	I1213 18:38:52.580975   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:52.581319   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:53.081002   38829 type.go:168] "Request Body" body=""
	I1213 18:38:53.081090   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:53.081366   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:53.580734   38829 type.go:168] "Request Body" body=""
	I1213 18:38:53.580811   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:53.581210   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:53.581264   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:54.081117   38829 type.go:168] "Request Body" body=""
	I1213 18:38:54.081197   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:54.081547   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:54.581298   38829 type.go:168] "Request Body" body=""
	I1213 18:38:54.581371   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:54.581643   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:55.081403   38829 type.go:168] "Request Body" body=""
	I1213 18:38:55.081482   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:55.081842   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:55.581455   38829 type.go:168] "Request Body" body=""
	I1213 18:38:55.581534   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:55.581851   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:55.581906   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:56.080602   38829 type.go:168] "Request Body" body=""
	I1213 18:38:56.080680   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:56.081049   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:56.580731   38829 type.go:168] "Request Body" body=""
	I1213 18:38:56.580803   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:56.581197   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:57.080761   38829 type.go:168] "Request Body" body=""
	I1213 18:38:57.080844   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:57.081204   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:57.580625   38829 type.go:168] "Request Body" body=""
	I1213 18:38:57.580703   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:57.580967   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:58.080697   38829 type.go:168] "Request Body" body=""
	I1213 18:38:58.080767   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:58.081073   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:38:58.081121   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:38:58.580746   38829 type.go:168] "Request Body" body=""
	I1213 18:38:58.580821   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:58.581193   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:59.080619   38829 type.go:168] "Request Body" body=""
	I1213 18:38:59.080690   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:59.080957   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:38:59.580697   38829 type.go:168] "Request Body" body=""
	I1213 18:38:59.580775   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:38:59.581075   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:00.080781   38829 type.go:168] "Request Body" body=""
	I1213 18:39:00.080864   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:00.081214   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:00.081263   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:00.580868   38829 type.go:168] "Request Body" body=""
	I1213 18:39:00.580959   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:00.581261   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:01.080726   38829 type.go:168] "Request Body" body=""
	I1213 18:39:01.080795   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:01.081160   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:01.580755   38829 type.go:168] "Request Body" body=""
	I1213 18:39:01.580837   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:01.581212   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:02.080885   38829 type.go:168] "Request Body" body=""
	I1213 18:39:02.080961   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:02.081256   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:02.081306   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:02.580741   38829 type.go:168] "Request Body" body=""
	I1213 18:39:02.580818   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:02.581177   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:03.080736   38829 type.go:168] "Request Body" body=""
	I1213 18:39:03.080810   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:03.081170   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:03.580700   38829 type.go:168] "Request Body" body=""
	I1213 18:39:03.580773   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:03.581077   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:04.080632   38829 type.go:168] "Request Body" body=""
	I1213 18:39:04.080714   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:04.081077   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:04.580778   38829 type.go:168] "Request Body" body=""
	I1213 18:39:04.580863   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:04.581243   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:04.581303   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:05.080687   38829 type.go:168] "Request Body" body=""
	I1213 18:39:05.080765   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:05.081059   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:05.580796   38829 type.go:168] "Request Body" body=""
	I1213 18:39:05.580872   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:05.581215   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:06.080727   38829 type.go:168] "Request Body" body=""
	I1213 18:39:06.080803   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:06.081158   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:06.580837   38829 type.go:168] "Request Body" body=""
	I1213 18:39:06.580917   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:06.581202   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:07.080725   38829 type.go:168] "Request Body" body=""
	I1213 18:39:07.080808   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:07.081164   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:07.081214   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:07.580716   38829 type.go:168] "Request Body" body=""
	I1213 18:39:07.580794   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:07.581129   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:08.080858   38829 type.go:168] "Request Body" body=""
	I1213 18:39:08.080931   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:08.081213   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:08.581137   38829 type.go:168] "Request Body" body=""
	I1213 18:39:08.581207   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:08.581513   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:09.081065   38829 type.go:168] "Request Body" body=""
	I1213 18:39:09.081139   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:09.081514   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:09.081581   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:09.581276   38829 type.go:168] "Request Body" body=""
	I1213 18:39:09.581342   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:09.581644   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:10.081407   38829 type.go:168] "Request Body" body=""
	I1213 18:39:10.081483   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:10.081851   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:10.581496   38829 type.go:168] "Request Body" body=""
	I1213 18:39:10.581567   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:10.581887   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:11.080629   38829 type.go:168] "Request Body" body=""
	I1213 18:39:11.080701   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:11.081001   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:11.580726   38829 type.go:168] "Request Body" body=""
	I1213 18:39:11.580805   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:11.581121   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:11.581171   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:12.080760   38829 type.go:168] "Request Body" body=""
	I1213 18:39:12.080838   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:12.081152   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:12.580671   38829 type.go:168] "Request Body" body=""
	I1213 18:39:12.580744   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:12.581068   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:13.080734   38829 type.go:168] "Request Body" body=""
	I1213 18:39:13.080808   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:13.081124   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:13.580863   38829 type.go:168] "Request Body" body=""
	I1213 18:39:13.580937   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:13.581281   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:13.581332   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:14.081577   38829 type.go:168] "Request Body" body=""
	I1213 18:39:14.081653   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:14.081950   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:14.580638   38829 type.go:168] "Request Body" body=""
	I1213 18:39:14.580713   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:14.581046   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:15.080717   38829 type.go:168] "Request Body" body=""
	I1213 18:39:15.080825   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:15.081191   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:15.580864   38829 type.go:168] "Request Body" body=""
	I1213 18:39:15.580936   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:15.581210   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:16.080732   38829 type.go:168] "Request Body" body=""
	I1213 18:39:16.080807   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:16.081171   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:16.081237   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:16.580894   38829 type.go:168] "Request Body" body=""
	I1213 18:39:16.580969   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:16.581301   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:17.080988   38829 type.go:168] "Request Body" body=""
	I1213 18:39:17.081089   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:17.081420   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:17.580765   38829 type.go:168] "Request Body" body=""
	I1213 18:39:17.580844   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:17.581202   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:18.080887   38829 type.go:168] "Request Body" body=""
	I1213 18:39:18.080962   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:18.081285   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:18.081330   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:18.581099   38829 type.go:168] "Request Body" body=""
	I1213 18:39:18.581170   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:18.581423   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:19.081384   38829 type.go:168] "Request Body" body=""
	I1213 18:39:19.081453   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:19.081768   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:19.581414   38829 type.go:168] "Request Body" body=""
	I1213 18:39:19.581490   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:19.581786   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:20.081602   38829 type.go:168] "Request Body" body=""
	I1213 18:39:20.081678   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:20.081965   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:20.082018   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:20.580679   38829 type.go:168] "Request Body" body=""
	I1213 18:39:20.580788   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:20.581147   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:21.080703   38829 type.go:168] "Request Body" body=""
	I1213 18:39:21.080796   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:21.081146   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:21.580784   38829 type.go:168] "Request Body" body=""
	I1213 18:39:21.580863   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:21.581224   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:22.080782   38829 type.go:168] "Request Body" body=""
	I1213 18:39:22.080855   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:22.081300   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:22.580762   38829 type.go:168] "Request Body" body=""
	I1213 18:39:22.580835   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:22.581147   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:22.581194   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:23.080788   38829 type.go:168] "Request Body" body=""
	I1213 18:39:23.080860   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:23.081193   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:23.580732   38829 type.go:168] "Request Body" body=""
	I1213 18:39:23.580820   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:23.581147   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:24.081435   38829 type.go:168] "Request Body" body=""
	I1213 18:39:24.081530   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:24.081884   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:24.581587   38829 type.go:168] "Request Body" body=""
	I1213 18:39:24.581657   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:24.581912   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:24.581951   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:25.080657   38829 type.go:168] "Request Body" body=""
	I1213 18:39:25.080734   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:25.081179   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:25.580733   38829 type.go:168] "Request Body" body=""
	I1213 18:39:25.580821   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:25.581190   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:26.080869   38829 type.go:168] "Request Body" body=""
	I1213 18:39:26.080936   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:26.081224   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:26.580741   38829 type.go:168] "Request Body" body=""
	I1213 18:39:26.580814   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:26.581148   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:27.080703   38829 type.go:168] "Request Body" body=""
	I1213 18:39:27.080786   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:27.081111   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:27.081165   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:27.580724   38829 type.go:168] "Request Body" body=""
	I1213 18:39:27.580797   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:27.581139   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:28.080722   38829 type.go:168] "Request Body" body=""
	I1213 18:39:28.080793   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:28.081199   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:28.580834   38829 type.go:168] "Request Body" body=""
	I1213 18:39:28.580915   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:28.581280   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:29.081285   38829 type.go:168] "Request Body" body=""
	I1213 18:39:29.081351   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:29.081628   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:29.081672   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:29.581065   38829 type.go:168] "Request Body" body=""
	I1213 18:39:29.581140   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:29.581481   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:30.081344   38829 type.go:168] "Request Body" body=""
	I1213 18:39:30.081439   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:30.081896   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:30.580671   38829 type.go:168] "Request Body" body=""
	I1213 18:39:30.580748   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:30.581066   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:31.080743   38829 type.go:168] "Request Body" body=""
	I1213 18:39:31.080834   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:31.081162   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:31.580866   38829 type.go:168] "Request Body" body=""
	I1213 18:39:31.580942   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:31.581337   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:31.581394   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:32.080782   38829 type.go:168] "Request Body" body=""
	I1213 18:39:32.080853   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:32.081134   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:32.580755   38829 type.go:168] "Request Body" body=""
	I1213 18:39:32.580829   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:32.581200   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:33.080901   38829 type.go:168] "Request Body" body=""
	I1213 18:39:33.080972   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:33.081318   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:33.580802   38829 type.go:168] "Request Body" body=""
	I1213 18:39:33.580878   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:33.581182   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:34.080872   38829 type.go:168] "Request Body" body=""
	I1213 18:39:34.080943   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:34.081303   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:34.081358   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:34.580730   38829 type.go:168] "Request Body" body=""
	I1213 18:39:34.580804   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:34.581136   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:35.080815   38829 type.go:168] "Request Body" body=""
	I1213 18:39:35.080883   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:35.081173   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:35.580730   38829 type.go:168] "Request Body" body=""
	I1213 18:39:35.580802   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:35.581133   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:36.080735   38829 type.go:168] "Request Body" body=""
	I1213 18:39:36.080809   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:36.081172   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:36.580859   38829 type.go:168] "Request Body" body=""
	I1213 18:39:36.580941   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:36.581223   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:36.581264   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:37.080720   38829 type.go:168] "Request Body" body=""
	I1213 18:39:37.080813   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:37.081267   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:37.580761   38829 type.go:168] "Request Body" body=""
	I1213 18:39:37.580833   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:37.581165   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:38.080809   38829 type.go:168] "Request Body" body=""
	I1213 18:39:38.080881   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:38.081177   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:38.581160   38829 type.go:168] "Request Body" body=""
	I1213 18:39:38.581229   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:38.581546   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:38.581608   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:39.081316   38829 type.go:168] "Request Body" body=""
	I1213 18:39:39.081387   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:39.081699   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:39.581307   38829 type.go:168] "Request Body" body=""
	I1213 18:39:39.581382   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:39.581710   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:40.081503   38829 type.go:168] "Request Body" body=""
	I1213 18:39:40.081578   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:40.081882   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:40.581632   38829 type.go:168] "Request Body" body=""
	I1213 18:39:40.581730   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:40.582090   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:40.582139   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:41.080640   38829 type.go:168] "Request Body" body=""
	I1213 18:39:41.080710   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:41.081046   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:41.580670   38829 type.go:168] "Request Body" body=""
	I1213 18:39:41.580748   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:41.581076   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:42.080797   38829 type.go:168] "Request Body" body=""
	I1213 18:39:42.080878   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:42.081282   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:42.580711   38829 type.go:168] "Request Body" body=""
	I1213 18:39:42.580802   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:42.581132   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:43.080747   38829 type.go:168] "Request Body" body=""
	I1213 18:39:43.080819   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:43.081217   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:43.081283   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:43.580965   38829 type.go:168] "Request Body" body=""
	I1213 18:39:43.581057   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:43.581416   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:44.081437   38829 type.go:168] "Request Body" body=""
	I1213 18:39:44.081507   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:44.081776   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:44.581633   38829 type.go:168] "Request Body" body=""
	I1213 18:39:44.581707   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:44.582020   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:45.080770   38829 type.go:168] "Request Body" body=""
	I1213 18:39:45.080891   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:45.081375   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:45.081434   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:45.581089   38829 type.go:168] "Request Body" body=""
	I1213 18:39:45.581158   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:45.581469   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:46.080755   38829 type.go:168] "Request Body" body=""
	I1213 18:39:46.080828   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:46.081201   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:46.580794   38829 type.go:168] "Request Body" body=""
	I1213 18:39:46.580865   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:46.581173   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:47.080689   38829 type.go:168] "Request Body" body=""
	I1213 18:39:47.080768   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:47.081094   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:47.580669   38829 type.go:168] "Request Body" body=""
	I1213 18:39:47.580763   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:47.581109   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:47.581164   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:48.080848   38829 type.go:168] "Request Body" body=""
	I1213 18:39:48.080924   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:48.081228   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:48.581237   38829 type.go:168] "Request Body" body=""
	I1213 18:39:48.581311   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:48.581637   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:49.081081   38829 type.go:168] "Request Body" body=""
	I1213 18:39:49.081164   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:49.081471   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:49.581258   38829 type.go:168] "Request Body" body=""
	I1213 18:39:49.581336   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:49.581617   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:49.581664   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:50.081346   38829 type.go:168] "Request Body" body=""
	I1213 18:39:50.081416   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:50.081693   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:50.581552   38829 type.go:168] "Request Body" body=""
	I1213 18:39:50.581621   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:50.581942   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:51.080672   38829 type.go:168] "Request Body" body=""
	I1213 18:39:51.080806   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:51.081235   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:51.580885   38829 type.go:168] "Request Body" body=""
	I1213 18:39:51.580958   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:51.581315   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:52.080737   38829 type.go:168] "Request Body" body=""
	I1213 18:39:52.080811   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:52.081193   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:52.081249   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:52.580704   38829 type.go:168] "Request Body" body=""
	I1213 18:39:52.580784   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:52.581172   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:53.080692   38829 type.go:168] "Request Body" body=""
	I1213 18:39:53.080761   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:53.081060   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:53.580744   38829 type.go:168] "Request Body" body=""
	I1213 18:39:53.580823   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:53.581232   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:54.081089   38829 type.go:168] "Request Body" body=""
	I1213 18:39:54.081164   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:54.081658   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:54.081712   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:54.581346   38829 type.go:168] "Request Body" body=""
	I1213 18:39:54.581418   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:54.581673   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:55.081499   38829 type.go:168] "Request Body" body=""
	I1213 18:39:55.081596   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:55.081941   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:55.580685   38829 type.go:168] "Request Body" body=""
	I1213 18:39:55.580777   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:55.581180   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:56.080674   38829 type.go:168] "Request Body" body=""
	I1213 18:39:56.080750   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:56.081047   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:56.580707   38829 type.go:168] "Request Body" body=""
	I1213 18:39:56.580778   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:56.581204   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:56.581262   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:57.080917   38829 type.go:168] "Request Body" body=""
	I1213 18:39:57.081002   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:57.081366   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:57.580664   38829 type.go:168] "Request Body" body=""
	I1213 18:39:57.580745   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:57.581033   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:58.081028   38829 type.go:168] "Request Body" body=""
	I1213 18:39:58.081122   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:58.081478   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:58.581557   38829 type.go:168] "Request Body" body=""
	I1213 18:39:58.581639   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:58.582001   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:39:58.582075   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:39:59.081358   38829 type.go:168] "Request Body" body=""
	I1213 18:39:59.081453   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:59.081774   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:39:59.581595   38829 type.go:168] "Request Body" body=""
	I1213 18:39:59.581667   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:39:59.581967   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:00.080718   38829 type.go:168] "Request Body" body=""
	I1213 18:40:00.080803   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:00.081201   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:00.582760   38829 type.go:168] "Request Body" body=""
	I1213 18:40:00.582857   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:00.583187   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:00.583244   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:01.080684   38829 type.go:168] "Request Body" body=""
	I1213 18:40:01.080755   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:01.081087   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:01.580820   38829 type.go:168] "Request Body" body=""
	I1213 18:40:01.580895   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:01.581240   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:02.080921   38829 type.go:168] "Request Body" body=""
	I1213 18:40:02.080993   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:02.081270   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:02.580731   38829 type.go:168] "Request Body" body=""
	I1213 18:40:02.580806   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:02.581172   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:03.080880   38829 type.go:168] "Request Body" body=""
	I1213 18:40:03.080955   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:03.081306   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:03.081361   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:03.580996   38829 type.go:168] "Request Body" body=""
	I1213 18:40:03.581076   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:03.581335   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:04.080737   38829 type.go:168] "Request Body" body=""
	I1213 18:40:04.080818   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:04.081183   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:04.580737   38829 type.go:168] "Request Body" body=""
	I1213 18:40:04.580808   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:04.581149   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:05.080850   38829 type.go:168] "Request Body" body=""
	I1213 18:40:05.080927   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:05.081263   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:05.580963   38829 type.go:168] "Request Body" body=""
	I1213 18:40:05.581056   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:05.581401   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:05.581460   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:06.081245   38829 type.go:168] "Request Body" body=""
	I1213 18:40:06.081316   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:06.081669   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:06.581426   38829 type.go:168] "Request Body" body=""
	I1213 18:40:06.581509   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:06.581848   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:07.081645   38829 type.go:168] "Request Body" body=""
	I1213 18:40:07.081722   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:07.082062   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:07.580728   38829 type.go:168] "Request Body" body=""
	I1213 18:40:07.580813   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:07.581162   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:08.080728   38829 type.go:168] "Request Body" body=""
	I1213 18:40:08.080798   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:08.081088   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:08.081131   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:08.580917   38829 type.go:168] "Request Body" body=""
	I1213 18:40:08.580997   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:08.581369   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:09.081067   38829 type.go:168] "Request Body" body=""
	I1213 18:40:09.081141   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:09.081470   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:09.581192   38829 type.go:168] "Request Body" body=""
	I1213 18:40:09.581258   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:09.581523   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:10.081376   38829 type.go:168] "Request Body" body=""
	I1213 18:40:10.081454   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:10.081809   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:10.081865   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:10.581615   38829 type.go:168] "Request Body" body=""
	I1213 18:40:10.581696   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:10.582036   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:11.080690   38829 type.go:168] "Request Body" body=""
	I1213 18:40:11.080762   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:11.081125   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:11.580814   38829 type.go:168] "Request Body" body=""
	I1213 18:40:11.580891   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:11.581233   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:12.080745   38829 type.go:168] "Request Body" body=""
	I1213 18:40:12.080820   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:12.081174   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:12.580730   38829 type.go:168] "Request Body" body=""
	I1213 18:40:12.580802   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:12.581118   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:12.581177   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:13.080870   38829 type.go:168] "Request Body" body=""
	I1213 18:40:13.080953   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:13.081298   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:13.580990   38829 type.go:168] "Request Body" body=""
	I1213 18:40:13.581130   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:13.581452   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:14.081563   38829 type.go:168] "Request Body" body=""
	I1213 18:40:14.081631   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:14.081949   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:14.580642   38829 type.go:168] "Request Body" body=""
	I1213 18:40:14.580724   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:14.581092   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:15.080672   38829 type.go:168] "Request Body" body=""
	I1213 18:40:15.080749   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:15.081138   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:15.081197   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:15.580905   38829 type.go:168] "Request Body" body=""
	I1213 18:40:15.580977   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:15.581270   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:16.080728   38829 type.go:168] "Request Body" body=""
	I1213 18:40:16.080801   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:16.081170   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:16.580745   38829 type.go:168] "Request Body" body=""
	I1213 18:40:16.580823   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:16.581182   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:17.080854   38829 type.go:168] "Request Body" body=""
	I1213 18:40:17.080925   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:17.081196   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:17.081236   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:17.580885   38829 type.go:168] "Request Body" body=""
	I1213 18:40:17.580960   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:17.581311   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:18.081048   38829 type.go:168] "Request Body" body=""
	I1213 18:40:18.081128   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:18.081456   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:18.581421   38829 type.go:168] "Request Body" body=""
	I1213 18:40:18.581495   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:18.581752   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:19.081269   38829 type.go:168] "Request Body" body=""
	I1213 18:40:19.081345   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:19.081667   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:19.081723   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:19.581465   38829 type.go:168] "Request Body" body=""
	I1213 18:40:19.581546   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:19.581834   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:20.081620   38829 type.go:168] "Request Body" body=""
	I1213 18:40:20.081707   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:20.082023   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:20.580732   38829 type.go:168] "Request Body" body=""
	I1213 18:40:20.580806   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:20.581185   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:21.080748   38829 type.go:168] "Request Body" body=""
	I1213 18:40:21.080828   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:21.081195   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:21.580880   38829 type.go:168] "Request Body" body=""
	I1213 18:40:21.580954   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:21.581229   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:21.581273   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:22.080726   38829 type.go:168] "Request Body" body=""
	I1213 18:40:22.080802   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:22.081186   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:22.580892   38829 type.go:168] "Request Body" body=""
	I1213 18:40:22.580971   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:22.581314   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:23.080852   38829 type.go:168] "Request Body" body=""
	I1213 18:40:23.080921   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:23.081254   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:23.580738   38829 type.go:168] "Request Body" body=""
	I1213 18:40:23.580816   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:23.581213   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:24.080992   38829 type.go:168] "Request Body" body=""
	I1213 18:40:24.081086   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:24.081439   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:24.081493   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:24.581181   38829 type.go:168] "Request Body" body=""
	I1213 18:40:24.581254   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:24.581518   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:25.081519   38829 type.go:168] "Request Body" body=""
	I1213 18:40:25.081638   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:25.082066   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:25.580956   38829 type.go:168] "Request Body" body=""
	I1213 18:40:25.581049   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:25.581403   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:26.081103   38829 type.go:168] "Request Body" body=""
	I1213 18:40:26.081188   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:26.081496   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:26.081544   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:26.581271   38829 type.go:168] "Request Body" body=""
	I1213 18:40:26.581346   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:26.581679   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:27.081463   38829 type.go:168] "Request Body" body=""
	I1213 18:40:27.081544   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:27.081845   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:27.581582   38829 type.go:168] "Request Body" body=""
	I1213 18:40:27.581657   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:27.581970   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:28.080670   38829 type.go:168] "Request Body" body=""
	I1213 18:40:28.080746   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:28.081095   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:28.580759   38829 type.go:168] "Request Body" body=""
	I1213 18:40:28.580833   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:28.581189   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:28.581244   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:29.080966   38829 type.go:168] "Request Body" body=""
	I1213 18:40:29.081057   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:29.081325   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:29.580737   38829 type.go:168] "Request Body" body=""
	I1213 18:40:29.580809   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:29.581235   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:30.080981   38829 type.go:168] "Request Body" body=""
	I1213 18:40:30.081106   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:30.081499   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:30.581288   38829 type.go:168] "Request Body" body=""
	I1213 18:40:30.581365   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:30.581686   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:30.581744   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:31.081563   38829 type.go:168] "Request Body" body=""
	I1213 18:40:31.081643   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:31.081985   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:31.580733   38829 type.go:168] "Request Body" body=""
	I1213 18:40:31.580813   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:31.581128   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:32.080686   38829 type.go:168] "Request Body" body=""
	I1213 18:40:32.080759   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:32.081089   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:32.580719   38829 type.go:168] "Request Body" body=""
	I1213 18:40:32.580795   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:32.581153   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:33.080697   38829 type.go:168] "Request Body" body=""
	I1213 18:40:33.080771   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:33.081078   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:33.081125   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:33.580695   38829 type.go:168] "Request Body" body=""
	I1213 18:40:33.580776   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:33.581082   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:34.080711   38829 type.go:168] "Request Body" body=""
	I1213 18:40:34.080785   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:34.081116   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:34.580731   38829 type.go:168] "Request Body" body=""
	I1213 18:40:34.580810   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:34.581135   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:35.080858   38829 type.go:168] "Request Body" body=""
	I1213 18:40:35.080940   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:35.081258   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:35.081316   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:35.580736   38829 type.go:168] "Request Body" body=""
	I1213 18:40:35.580819   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:35.581180   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:36.080905   38829 type.go:168] "Request Body" body=""
	I1213 18:40:36.080982   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:36.081405   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:36.580715   38829 type.go:168] "Request Body" body=""
	I1213 18:40:36.580780   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:36.581071   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:37.080758   38829 type.go:168] "Request Body" body=""
	I1213 18:40:37.080841   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:37.081177   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:37.580742   38829 type.go:168] "Request Body" body=""
	I1213 18:40:37.580822   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:37.581185   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:37.581240   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:38.080845   38829 type.go:168] "Request Body" body=""
	I1213 18:40:38.080924   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:38.081284   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:38.580992   38829 type.go:168] "Request Body" body=""
	I1213 18:40:38.581079   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:38.581427   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:39.081037   38829 type.go:168] "Request Body" body=""
	I1213 18:40:39.081109   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:39.081425   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:39.580691   38829 type.go:168] "Request Body" body=""
	I1213 18:40:39.580779   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:39.581096   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:40.080864   38829 type.go:168] "Request Body" body=""
	I1213 18:40:40.080952   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:40.081316   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:40.081370   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:40.581072   38829 type.go:168] "Request Body" body=""
	I1213 18:40:40.581147   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:40.581455   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:41.080649   38829 type.go:168] "Request Body" body=""
	I1213 18:40:41.080720   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:41.080968   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:41.580717   38829 type.go:168] "Request Body" body=""
	I1213 18:40:41.580821   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:41.581143   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:42.080793   38829 type.go:168] "Request Body" body=""
	I1213 18:40:42.080889   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:42.081224   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:42.580774   38829 type.go:168] "Request Body" body=""
	I1213 18:40:42.580846   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:42.581129   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:42.581171   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:43.080817   38829 type.go:168] "Request Body" body=""
	I1213 18:40:43.080889   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:43.081182   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:43.580912   38829 type.go:168] "Request Body" body=""
	I1213 18:40:43.581022   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:43.581350   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:44.081100   38829 type.go:168] "Request Body" body=""
	I1213 18:40:44.081184   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:44.081466   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:44.581295   38829 type.go:168] "Request Body" body=""
	I1213 18:40:44.581368   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:44.581680   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:44.581735   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:45.081574   38829 type.go:168] "Request Body" body=""
	I1213 18:40:45.081671   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:45.082057   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:45.580753   38829 type.go:168] "Request Body" body=""
	I1213 18:40:45.580826   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:45.581123   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:46.080724   38829 type.go:168] "Request Body" body=""
	I1213 18:40:46.080807   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:46.081173   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:46.580875   38829 type.go:168] "Request Body" body=""
	I1213 18:40:46.580954   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:46.581347   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:47.080772   38829 type.go:168] "Request Body" body=""
	I1213 18:40:47.080843   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:47.081169   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:47.081222   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:47.580721   38829 type.go:168] "Request Body" body=""
	I1213 18:40:47.580803   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:47.581145   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:48.080733   38829 type.go:168] "Request Body" body=""
	I1213 18:40:48.080812   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:48.081180   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:48.581574   38829 type.go:168] "Request Body" body=""
	I1213 18:40:48.581646   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:48.581923   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:49.080895   38829 type.go:168] "Request Body" body=""
	I1213 18:40:49.080969   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:49.081284   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:49.081332   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:49.580737   38829 type.go:168] "Request Body" body=""
	I1213 18:40:49.580813   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:49.581189   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:50.080877   38829 type.go:168] "Request Body" body=""
	I1213 18:40:50.080951   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:50.081313   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:50.580740   38829 type.go:168] "Request Body" body=""
	I1213 18:40:50.580817   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:50.581173   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:51.080726   38829 type.go:168] "Request Body" body=""
	I1213 18:40:51.080811   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:51.081140   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:51.580661   38829 type.go:168] "Request Body" body=""
	I1213 18:40:51.580735   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:51.581094   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:51.581147   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:52.080738   38829 type.go:168] "Request Body" body=""
	I1213 18:40:52.080814   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:52.081156   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:52.580707   38829 type.go:168] "Request Body" body=""
	I1213 18:40:52.580781   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:52.581124   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:53.080661   38829 type.go:168] "Request Body" body=""
	I1213 18:40:53.080737   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:53.081101   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:53.580661   38829 type.go:168] "Request Body" body=""
	I1213 18:40:53.580737   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:53.581073   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:54.081075   38829 type.go:168] "Request Body" body=""
	I1213 18:40:54.081153   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:54.081490   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:54.081544   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:54.580688   38829 type.go:168] "Request Body" body=""
	I1213 18:40:54.580770   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:54.581090   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:55.080755   38829 type.go:168] "Request Body" body=""
	I1213 18:40:55.080845   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:55.081218   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:55.580731   38829 type.go:168] "Request Body" body=""
	I1213 18:40:55.580806   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:55.581128   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:56.080828   38829 type.go:168] "Request Body" body=""
	I1213 18:40:56.080907   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:56.081254   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:56.580945   38829 type.go:168] "Request Body" body=""
	I1213 18:40:56.581061   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:56.581383   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:56.581438   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:57.081145   38829 type.go:168] "Request Body" body=""
	I1213 18:40:57.081219   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:57.081499   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:57.581369   38829 type.go:168] "Request Body" body=""
	I1213 18:40:57.581461   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:57.581753   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:58.081564   38829 type.go:168] "Request Body" body=""
	I1213 18:40:58.081635   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:58.081964   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:58.580734   38829 type.go:168] "Request Body" body=""
	I1213 18:40:58.580811   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:58.581151   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:40:59.081182   38829 type.go:168] "Request Body" body=""
	I1213 18:40:59.081258   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:59.081514   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:40:59.081555   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:40:59.581349   38829 type.go:168] "Request Body" body=""
	I1213 18:40:59.581423   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:40:59.581720   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:00.081815   38829 type.go:168] "Request Body" body=""
	I1213 18:41:00.081903   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:00.082221   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:00.581646   38829 type.go:168] "Request Body" body=""
	I1213 18:41:00.581716   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:00.582021   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:01.080712   38829 type.go:168] "Request Body" body=""
	I1213 18:41:01.080792   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:01.081087   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:01.580731   38829 type.go:168] "Request Body" body=""
	I1213 18:41:01.580810   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:01.581320   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:01.581376   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:02.080810   38829 type.go:168] "Request Body" body=""
	I1213 18:41:02.080888   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:02.081180   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:02.580849   38829 type.go:168] "Request Body" body=""
	I1213 18:41:02.580920   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:02.581274   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:03.080853   38829 type.go:168] "Request Body" body=""
	I1213 18:41:03.080929   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:03.081297   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:03.580687   38829 type.go:168] "Request Body" body=""
	I1213 18:41:03.580761   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:03.581113   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:04.080818   38829 type.go:168] "Request Body" body=""
	I1213 18:41:04.080891   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:04.081231   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:04.081279   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:04.580784   38829 type.go:168] "Request Body" body=""
	I1213 18:41:04.580861   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:04.581254   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:05.080702   38829 type.go:168] "Request Body" body=""
	I1213 18:41:05.080774   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:05.081067   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:05.580726   38829 type.go:168] "Request Body" body=""
	I1213 18:41:05.580823   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:05.581149   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:06.080754   38829 type.go:168] "Request Body" body=""
	I1213 18:41:06.080824   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:06.081183   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:06.580809   38829 type.go:168] "Request Body" body=""
	I1213 18:41:06.580876   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:06.581193   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:06.581275   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:07.080748   38829 type.go:168] "Request Body" body=""
	I1213 18:41:07.080818   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:07.081155   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:07.580864   38829 type.go:168] "Request Body" body=""
	I1213 18:41:07.580935   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:07.581293   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:08.080815   38829 type.go:168] "Request Body" body=""
	I1213 18:41:08.080882   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:08.081228   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:08.581184   38829 type.go:168] "Request Body" body=""
	I1213 18:41:08.581267   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:08.581600   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:08.581650   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:09.081329   38829 type.go:168] "Request Body" body=""
	I1213 18:41:09.081400   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:09.081701   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:09.581386   38829 type.go:168] "Request Body" body=""
	I1213 18:41:09.581459   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:09.581736   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:10.081624   38829 type.go:168] "Request Body" body=""
	I1213 18:41:10.081709   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:10.082054   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:10.580758   38829 type.go:168] "Request Body" body=""
	I1213 18:41:10.580829   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:10.581165   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:11.080690   38829 type.go:168] "Request Body" body=""
	I1213 18:41:11.080767   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:11.081130   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:11.081225   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:11.580737   38829 type.go:168] "Request Body" body=""
	I1213 18:41:11.580838   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:11.581297   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:12.080983   38829 type.go:168] "Request Body" body=""
	I1213 18:41:12.081129   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:12.081449   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:12.581247   38829 type.go:168] "Request Body" body=""
	I1213 18:41:12.581315   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:12.581576   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:13.080944   38829 type.go:168] "Request Body" body=""
	I1213 18:41:13.081031   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:13.081378   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:13.081435   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:13.580973   38829 type.go:168] "Request Body" body=""
	I1213 18:41:13.581116   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:13.581497   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:14.081648   38829 type.go:168] "Request Body" body=""
	I1213 18:41:14.081731   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:14.082000   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:14.580709   38829 type.go:168] "Request Body" body=""
	I1213 18:41:14.580805   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:14.581161   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:15.080870   38829 type.go:168] "Request Body" body=""
	I1213 18:41:15.080947   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:15.081336   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:15.580661   38829 type.go:168] "Request Body" body=""
	I1213 18:41:15.580729   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:15.581047   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:15.581086   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:16.080721   38829 type.go:168] "Request Body" body=""
	I1213 18:41:16.080833   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:16.081148   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:16.580760   38829 type.go:168] "Request Body" body=""
	I1213 18:41:16.580840   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:16.581166   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:17.080685   38829 type.go:168] "Request Body" body=""
	I1213 18:41:17.080772   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:17.081106   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:17.580714   38829 type.go:168] "Request Body" body=""
	I1213 18:41:17.580795   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:17.581116   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:17.581162   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:18.080745   38829 type.go:168] "Request Body" body=""
	I1213 18:41:18.080820   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:18.081200   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:18.581224   38829 type.go:168] "Request Body" body=""
	I1213 18:41:18.581296   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:18.581580   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:19.081352   38829 type.go:168] "Request Body" body=""
	I1213 18:41:19.081427   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:19.081734   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:19.581454   38829 type.go:168] "Request Body" body=""
	I1213 18:41:19.581571   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:19.581908   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:19.581960   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:20.081575   38829 type.go:168] "Request Body" body=""
	I1213 18:41:20.081653   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:20.081930   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:20.580639   38829 type.go:168] "Request Body" body=""
	I1213 18:41:20.580722   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:20.581082   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:21.080807   38829 type.go:168] "Request Body" body=""
	I1213 18:41:21.080885   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:21.081222   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:21.580675   38829 type.go:168] "Request Body" body=""
	I1213 18:41:21.580755   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:21.581125   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:22.080711   38829 type.go:168] "Request Body" body=""
	I1213 18:41:22.080789   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:22.081124   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:22.081174   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:22.580748   38829 type.go:168] "Request Body" body=""
	I1213 18:41:22.580823   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:22.581169   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:23.080686   38829 type.go:168] "Request Body" body=""
	I1213 18:41:23.080758   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:23.081067   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:23.580652   38829 type.go:168] "Request Body" body=""
	I1213 18:41:23.580733   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:23.581072   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:24.081615   38829 type.go:168] "Request Body" body=""
	I1213 18:41:24.081701   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:24.082028   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:24.082086   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:24.580715   38829 type.go:168] "Request Body" body=""
	I1213 18:41:24.580790   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:24.581145   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:25.080723   38829 type.go:168] "Request Body" body=""
	I1213 18:41:25.080800   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:25.081135   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:25.580732   38829 type.go:168] "Request Body" body=""
	I1213 18:41:25.580804   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:25.581183   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:26.080778   38829 type.go:168] "Request Body" body=""
	I1213 18:41:26.080846   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:26.081178   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:26.580887   38829 type.go:168] "Request Body" body=""
	I1213 18:41:26.580963   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:26.581315   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:26.581370   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:27.080706   38829 type.go:168] "Request Body" body=""
	I1213 18:41:27.080786   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:27.081128   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:27.580668   38829 type.go:168] "Request Body" body=""
	I1213 18:41:27.580741   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:27.581056   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:28.080772   38829 type.go:168] "Request Body" body=""
	I1213 18:41:28.080845   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:28.081201   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:28.580902   38829 type.go:168] "Request Body" body=""
	I1213 18:41:28.580974   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:28.581301   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:29.080749   38829 type.go:168] "Request Body" body=""
	I1213 18:41:29.080817   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:29.081091   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:29.081132   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:29.580839   38829 type.go:168] "Request Body" body=""
	I1213 18:41:29.580981   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:29.581329   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:30.080766   38829 type.go:168] "Request Body" body=""
	I1213 18:41:30.080851   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:30.081270   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:30.580990   38829 type.go:168] "Request Body" body=""
	I1213 18:41:30.581076   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:30.581343   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:31.080711   38829 type.go:168] "Request Body" body=""
	I1213 18:41:31.080787   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:31.081149   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:31.081200   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:31.580852   38829 type.go:168] "Request Body" body=""
	I1213 18:41:31.580935   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:31.581309   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:32.080976   38829 type.go:168] "Request Body" body=""
	I1213 18:41:32.081071   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:32.081376   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:32.580730   38829 type.go:168] "Request Body" body=""
	I1213 18:41:32.580812   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:32.581179   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:33.080899   38829 type.go:168] "Request Body" body=""
	I1213 18:41:33.080979   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:33.081353   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:33.081413   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:33.580694   38829 type.go:168] "Request Body" body=""
	I1213 18:41:33.580774   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:33.581069   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:34.081613   38829 type.go:168] "Request Body" body=""
	I1213 18:41:34.081689   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:34.082033   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:34.580727   38829 type.go:168] "Request Body" body=""
	I1213 18:41:34.580828   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:34.581146   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:35.080790   38829 type.go:168] "Request Body" body=""
	I1213 18:41:35.080863   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:35.081157   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:35.580696   38829 type.go:168] "Request Body" body=""
	I1213 18:41:35.580790   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:35.581078   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:35.581121   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:36.080756   38829 type.go:168] "Request Body" body=""
	I1213 18:41:36.080851   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:36.081282   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:36.580668   38829 type.go:168] "Request Body" body=""
	I1213 18:41:36.580739   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:36.581032   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:37.080757   38829 type.go:168] "Request Body" body=""
	I1213 18:41:37.080851   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:37.081179   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:37.580859   38829 type.go:168] "Request Body" body=""
	I1213 18:41:37.580931   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:37.581253   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:37.581299   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:38.080940   38829 type.go:168] "Request Body" body=""
	I1213 18:41:38.081033   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:38.081302   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:38.581248   38829 type.go:168] "Request Body" body=""
	I1213 18:41:38.581332   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:38.581671   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:39.081578   38829 type.go:168] "Request Body" body=""
	I1213 18:41:39.081659   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:39.081987   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:39.580653   38829 type.go:168] "Request Body" body=""
	I1213 18:41:39.580729   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:39.581076   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:40.080757   38829 type.go:168] "Request Body" body=""
	I1213 18:41:40.080841   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:40.081195   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:40.081257   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:40.580739   38829 type.go:168] "Request Body" body=""
	I1213 18:41:40.580813   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:40.581120   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:41.080675   38829 type.go:168] "Request Body" body=""
	I1213 18:41:41.080749   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:41.081085   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:41.580789   38829 type.go:168] "Request Body" body=""
	I1213 18:41:41.580862   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:41.581170   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:42.080802   38829 type.go:168] "Request Body" body=""
	I1213 18:41:42.080877   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:42.081216   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:42.580919   38829 type.go:168] "Request Body" body=""
	I1213 18:41:42.580994   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:42.581286   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:42.581339   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:43.080761   38829 type.go:168] "Request Body" body=""
	I1213 18:41:43.080833   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:43.081217   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:43.580933   38829 type.go:168] "Request Body" body=""
	I1213 18:41:43.581025   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:43.581344   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:44.081112   38829 type.go:168] "Request Body" body=""
	I1213 18:41:44.081178   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:44.081445   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:44.581279   38829 type.go:168] "Request Body" body=""
	I1213 18:41:44.581350   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:44.581653   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:44.581708   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:45.081520   38829 type.go:168] "Request Body" body=""
	I1213 18:41:45.081600   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:45.081937   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:45.580652   38829 type.go:168] "Request Body" body=""
	I1213 18:41:45.580731   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:45.581051   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:46.080751   38829 type.go:168] "Request Body" body=""
	I1213 18:41:46.080838   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:46.081265   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:46.580968   38829 type.go:168] "Request Body" body=""
	I1213 18:41:46.581065   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:46.581388   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:47.080619   38829 type.go:168] "Request Body" body=""
	I1213 18:41:47.080685   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:47.080942   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:47.080980   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:47.580668   38829 type.go:168] "Request Body" body=""
	I1213 18:41:47.580743   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:47.581077   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:48.080761   38829 type.go:168] "Request Body" body=""
	I1213 18:41:48.080842   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:48.081166   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:48.581104   38829 type.go:168] "Request Body" body=""
	I1213 18:41:48.581172   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:48.581434   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:49.081502   38829 type.go:168] "Request Body" body=""
	I1213 18:41:49.081574   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:49.081903   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:49.081968   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:49.580639   38829 type.go:168] "Request Body" body=""
	I1213 18:41:49.580722   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:49.581089   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:50.080709   38829 type.go:168] "Request Body" body=""
	I1213 18:41:50.080785   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:50.081111   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:50.580720   38829 type.go:168] "Request Body" body=""
	I1213 18:41:50.580802   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:50.581143   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:51.080888   38829 type.go:168] "Request Body" body=""
	I1213 18:41:51.080963   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:51.081279   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:51.580674   38829 type.go:168] "Request Body" body=""
	I1213 18:41:51.580740   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:51.581077   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:51.581128   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:52.080773   38829 type.go:168] "Request Body" body=""
	I1213 18:41:52.080894   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:52.081249   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:52.580793   38829 type.go:168] "Request Body" body=""
	I1213 18:41:52.580867   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:52.581218   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:53.080706   38829 type.go:168] "Request Body" body=""
	I1213 18:41:53.080781   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:53.081080   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:53.580683   38829 type.go:168] "Request Body" body=""
	I1213 18:41:53.580763   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:53.581106   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:53.581159   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:54.080735   38829 type.go:168] "Request Body" body=""
	I1213 18:41:54.080815   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:54.081170   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:54.580662   38829 type.go:168] "Request Body" body=""
	I1213 18:41:54.580733   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:54.581088   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:55.080714   38829 type.go:168] "Request Body" body=""
	I1213 18:41:55.080791   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:55.081154   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:55.580764   38829 type.go:168] "Request Body" body=""
	I1213 18:41:55.580837   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:55.581137   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:55.581182   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:56.080717   38829 type.go:168] "Request Body" body=""
	I1213 18:41:56.080790   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:56.081130   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:56.580729   38829 type.go:168] "Request Body" body=""
	I1213 18:41:56.580826   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:56.581140   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:57.080852   38829 type.go:168] "Request Body" body=""
	I1213 18:41:57.080924   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:57.081256   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:57.580921   38829 type.go:168] "Request Body" body=""
	I1213 18:41:57.581000   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:57.581269   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 18:41:57.581307   38829 node_ready.go:55] error getting node "functional-752103" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-752103": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 18:41:58.080750   38829 type.go:168] "Request Body" body=""
	I1213 18:41:58.080828   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:58.081201   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:58.580714   38829 type.go:168] "Request Body" body=""
	I1213 18:41:58.580799   38829 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-752103" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 18:41:58.581146   38829 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 18:41:59.081521   38829 type.go:168] "Request Body" body=""
	I1213 18:41:59.081580   38829 node_ready.go:38] duration metric: took 6m0.001077775s for node "functional-752103" to be "Ready" ...
	I1213 18:41:59.084666   38829 out.go:203] 
	W1213 18:41:59.087601   38829 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1213 18:41:59.087625   38829 out.go:285] * 
	W1213 18:41:59.089766   38829 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 18:41:59.092666   38829 out.go:203] 
	
	
	==> CRI-O <==
	Dec 13 18:42:08 functional-752103 crio[5390]: time="2025-12-13T18:42:08.12832746Z" level=info msg="Neither image nor artifact registry.k8s.io/pause:latest found" id=c755762e-8c02-4272-8897-bf6f4c3f3299 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:42:09 functional-752103 crio[5390]: time="2025-12-13T18:42:09.180193084Z" level=info msg="Checking image status: minikube-local-cache-test:functional-752103" id=d8c01cf5-8f87-4579-830a-467c9aa59a43 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:42:09 functional-752103 crio[5390]: time="2025-12-13T18:42:09.180374213Z" level=info msg="Resolving \"minikube-local-cache-test\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 13 18:42:09 functional-752103 crio[5390]: time="2025-12-13T18:42:09.180418472Z" level=info msg="Image minikube-local-cache-test:functional-752103 not found" id=d8c01cf5-8f87-4579-830a-467c9aa59a43 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:42:09 functional-752103 crio[5390]: time="2025-12-13T18:42:09.180487863Z" level=info msg="Neither image nor artfiact minikube-local-cache-test:functional-752103 found" id=d8c01cf5-8f87-4579-830a-467c9aa59a43 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:42:09 functional-752103 crio[5390]: time="2025-12-13T18:42:09.204345068Z" level=info msg="Checking image status: docker.io/library/minikube-local-cache-test:functional-752103" id=39116f54-1c25-4019-8682-c21aa17467f4 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:42:09 functional-752103 crio[5390]: time="2025-12-13T18:42:09.204482373Z" level=info msg="Image docker.io/library/minikube-local-cache-test:functional-752103 not found" id=39116f54-1c25-4019-8682-c21aa17467f4 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:42:09 functional-752103 crio[5390]: time="2025-12-13T18:42:09.204523202Z" level=info msg="Neither image nor artfiact docker.io/library/minikube-local-cache-test:functional-752103 found" id=39116f54-1c25-4019-8682-c21aa17467f4 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:42:09 functional-752103 crio[5390]: time="2025-12-13T18:42:09.231258723Z" level=info msg="Checking image status: localhost/library/minikube-local-cache-test:functional-752103" id=9b2633cd-0a2d-4c6e-bd18-f94a6181518d name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:42:09 functional-752103 crio[5390]: time="2025-12-13T18:42:09.23139296Z" level=info msg="Image localhost/library/minikube-local-cache-test:functional-752103 not found" id=9b2633cd-0a2d-4c6e-bd18-f94a6181518d name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:42:09 functional-752103 crio[5390]: time="2025-12-13T18:42:09.231456238Z" level=info msg="Neither image nor artfiact localhost/library/minikube-local-cache-test:functional-752103 found" id=9b2633cd-0a2d-4c6e-bd18-f94a6181518d name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:42:10 functional-752103 crio[5390]: time="2025-12-13T18:42:10.200775437Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=52ab8407-ea44-4259-b544-9f05df9b2f6e name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:42:10 functional-752103 crio[5390]: time="2025-12-13T18:42:10.539004038Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=5fab9b18-e673-445d-a76a-ddec399764c0 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:42:10 functional-752103 crio[5390]: time="2025-12-13T18:42:10.539143567Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=5fab9b18-e673-445d-a76a-ddec399764c0 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:42:10 functional-752103 crio[5390]: time="2025-12-13T18:42:10.539178865Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=5fab9b18-e673-445d-a76a-ddec399764c0 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:42:11 functional-752103 crio[5390]: time="2025-12-13T18:42:11.116424968Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=eb444054-ebd3-4c5e-b1a8-680cdcf483d2 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:42:11 functional-752103 crio[5390]: time="2025-12-13T18:42:11.116556234Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=eb444054-ebd3-4c5e-b1a8-680cdcf483d2 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:42:11 functional-752103 crio[5390]: time="2025-12-13T18:42:11.116596883Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=eb444054-ebd3-4c5e-b1a8-680cdcf483d2 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:42:11 functional-752103 crio[5390]: time="2025-12-13T18:42:11.160984811Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=1f2eaebb-7394-4c89-9ded-c81c523ae3bc name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:42:11 functional-752103 crio[5390]: time="2025-12-13T18:42:11.161193846Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=1f2eaebb-7394-4c89-9ded-c81c523ae3bc name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:42:11 functional-752103 crio[5390]: time="2025-12-13T18:42:11.161231433Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=1f2eaebb-7394-4c89-9ded-c81c523ae3bc name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:42:11 functional-752103 crio[5390]: time="2025-12-13T18:42:11.21238756Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=95d12dfb-44bb-43ef-9e17-70d511fc828f name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:42:11 functional-752103 crio[5390]: time="2025-12-13T18:42:11.212543114Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=95d12dfb-44bb-43ef-9e17-70d511fc828f name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:42:11 functional-752103 crio[5390]: time="2025-12-13T18:42:11.212590704Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=95d12dfb-44bb-43ef-9e17-70d511fc828f name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:42:11 functional-752103 crio[5390]: time="2025-12-13T18:42:11.759905226Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=63662edb-6e0f-4d27-af3a-ceaeebbb2a50 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:42:15.780665    9555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:42:15.781463    9555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:42:15.783109    9555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:42:15.783722    9555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:42:15.785358    9555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 17:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014739] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.517365] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033368] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.774100] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.795951] kauditd_printk_skb: 39 callbacks suppressed
	[Dec13 18:17] overlayfs: idmapped layers are currently not supported
	[  +0.067652] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec13 18:23] overlayfs: idmapped layers are currently not supported
	[Dec13 18:24] overlayfs: idmapped layers are currently not supported
	[Dec13 18:42] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 18:42:15 up  1:24,  0 user,  load average: 0.54, 0.35, 0.44
	Linux functional-752103 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 18:42:13 functional-752103 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 18:42:14 functional-752103 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1148.
	Dec 13 18:42:14 functional-752103 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:42:14 functional-752103 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:42:14 functional-752103 kubelet[9432]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:42:14 functional-752103 kubelet[9432]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:42:14 functional-752103 kubelet[9432]: E1213 18:42:14.160044    9432 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 18:42:14 functional-752103 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 18:42:14 functional-752103 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 18:42:14 functional-752103 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1149.
	Dec 13 18:42:14 functional-752103 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:42:14 functional-752103 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:42:14 functional-752103 kubelet[9467]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:42:14 functional-752103 kubelet[9467]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:42:14 functional-752103 kubelet[9467]: E1213 18:42:14.888090    9467 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 18:42:14 functional-752103 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 18:42:14 functional-752103 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 18:42:15 functional-752103 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1150.
	Dec 13 18:42:15 functional-752103 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:42:15 functional-752103 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:42:15 functional-752103 kubelet[9513]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:42:15 functional-752103 kubelet[9513]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:42:15 functional-752103 kubelet[9513]: E1213 18:42:15.641380    9513 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 18:42:15 functional-752103 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 18:42:15 functional-752103 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
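The kubelet section of the log dump above shows why every readiness probe earlier in the dump is refused: kubelet v1.35.0-beta.0 exits during configuration validation because the host is still on cgroup v1, so the apiserver on 192.168.49.2:8441 never comes up. A minimal diagnostic sketch built only from commands that appear in, or are implied by, this report; the YAML casing of the 'FailCgroupV1' option quoted in the kubeadm warnings later in this report is an assumption:

	# Check which cgroup hierarchy the host exposes:
	#   "cgroup2fs" means cgroup v2; "tmpfs" means cgroup v1 (the failing case here)
	stat -fc %T /sys/fs/cgroup

	# Inspect the repeated kubelet restarts that systemd reports above
	out/minikube-linux-arm64 -p functional-752103 ssh -- sudo journalctl -xeu kubelet --no-pager | tail -n 50

	# Per the kubeadm warning later in this report, a cgroup v1 host must opt in by
	# setting the kubelet option 'FailCgroupV1' to 'false'; as a KubeletConfiguration
	# snippet (field casing assumed) that would look like:
	cat > kubelet-cgroupv1-patch.yaml <<'EOF'
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false
	EOF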
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-752103 -n functional-752103
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-752103 -n functional-752103: exit status 2 (325.952563ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-752103" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (2.40s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (735.25s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-752103 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1213 18:44:44.921422    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 18:46:42.464957    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-350101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 18:48:05.531867    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-350101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 18:49:44.921163    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 18:51:42.461175    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-350101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-752103 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 109 (12m12.934167521s)

                                                
                                                
-- stdout --
	* [functional-752103] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22122
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22122-2686/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-2686/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-752103" primary control-plane node in "functional-752103" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000261471s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000245862s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000245862s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

                                                
                                                
** /stderr **
functional_test.go:774: failed to restart minikube. args "out/minikube-linux-arm64 start -p functional-752103 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 109
functional_test.go:776: restart took 12m12.935336949s for "functional-752103" cluster.
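The stderr block above ends with minikube's own remediation hint (check 'journalctl -xeu kubelet' and retry with --extra-config=kubelet.cgroup-driver=systemd). A hedged sketch of acting on that hint for this profile; whether the cgroup-driver change actually clears the cgroup v1 validation failure that kubeadm reports is not established by this run:

	# Probe the kubelet health endpoint kubeadm was polling (assumes curl is present on the node)
	out/minikube-linux-arm64 -p functional-752103 ssh -- curl -sSL http://127.0.0.1:10248/healthz

	# Retry the same start with the flag suggested in the failure output
	out/minikube-linux-arm64 start -p functional-752103 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
	  --extra-config=kubelet.cgroup-driver=systemd \
	  --wait=all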
I1213 18:54:29.718521    4637 config.go:182] Loaded profile config "functional-752103": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-752103
helpers_test.go:244: (dbg) docker inspect functional-752103:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b",
	        "Created": "2025-12-13T18:27:36.869398923Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 33347,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T18:27:36.933863328Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b/hostname",
	        "HostsPath": "/var/lib/docker/containers/d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b/hosts",
	        "LogPath": "/var/lib/docker/containers/d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b/d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b-json.log",
	        "Name": "/functional-752103",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-752103:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-752103",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b",
	                "LowerDir": "/var/lib/docker/overlay2/4f08fa508e4fa526830ae73227295c5d2e0629c99efbda84852b3df25bd9e170-init/diff:/var/lib/docker/overlay2/4cda671c3c20fb572bbb254b6cb2d66de67b46788c2aa883ec19024f1ff16f23/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4f08fa508e4fa526830ae73227295c5d2e0629c99efbda84852b3df25bd9e170/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4f08fa508e4fa526830ae73227295c5d2e0629c99efbda84852b3df25bd9e170/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4f08fa508e4fa526830ae73227295c5d2e0629c99efbda84852b3df25bd9e170/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-752103",
	                "Source": "/var/lib/docker/volumes/functional-752103/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-752103",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-752103",
	                "name.minikube.sigs.k8s.io": "functional-752103",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "625ea12887c8956887678f2408d6edd5b98f62bce458a6906f4f662a3001a53b",
	            "SandboxKey": "/var/run/docker/netns/625ea12887c8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-752103": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7e:2c:83:4a:30:9a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "84df48e9f7dac8c6a1b67709e5eea216d99d3f16eb50b96c7f0e4a82b3193d56",
	                    "EndpointID": "e69b1f9610d40396647a2d78f0170c31b9cd8e641fc8465e742649cccee8e591",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-752103",
	                        "d72b547cdcc2"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
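The block above is docker container inspect output for the functional-752103 kic container; the 22/tcp -> 127.0.0.1:32783 binding it reports is the same one the provisioning log further below resolves with an inspect format template before dialing SSH. As a minimal illustrative sketch, not part of the test suite, the same binding could be read from the inspect JSON using only the Go standard library (the file name, struct, and usage are hypothetical):

	// portprobe.go - illustrative only; not part of the minikube integration suite.
	// Reads the published host port(s) for 22/tcp from `docker container inspect` JSON,
	// i.e. the 127.0.0.1:32783 binding shown in the output above. Assumes the docker
	// CLI is on PATH; the container name is taken from os.Args[1].
	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os"
		"os/exec"
	)

	// inspectEntry models only the fields used here.
	type inspectEntry struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		if len(os.Args) < 2 {
			log.Fatal("usage: portprobe <container-name>")
		}
		out, err := exec.Command("docker", "container", "inspect", os.Args[1]).Output()
		if err != nil {
			log.Fatalf("docker container inspect %s: %v", os.Args[1], err)
		}
		var entries []inspectEntry
		if err := json.Unmarshal(out, &entries); err != nil || len(entries) == 0 {
			log.Fatalf("decoding inspect output: %v", err)
		}
		for _, b := range entries[0].NetworkSettings.Ports["22/tcp"] {
			fmt.Printf("22/tcp -> %s:%s\n", b.HostIp, b.HostPort)
		}
	}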
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-752103 -n functional-752103
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-752103 -n functional-752103: exit status 2 (345.860679ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-752103 logs -n 25: (1.063493999s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-350101 image ls --format short --alsologtostderr                                                                                       │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ ssh     │ functional-350101 ssh pgrep buildkitd                                                                                                             │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │                     │
	│ image   │ functional-350101 image ls --format json --alsologtostderr                                                                                        │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ image   │ functional-350101 image ls --format table --alsologtostderr                                                                                       │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ image   │ functional-350101 image build -t localhost/my-image:functional-350101 testdata/build --alsologtostderr                                            │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ image   │ functional-350101 image ls                                                                                                                        │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ delete  │ -p functional-350101                                                                                                                              │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ start   │ -p functional-752103 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │                     │
	│ start   │ -p functional-752103 --alsologtostderr -v=8                                                                                                       │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:35 UTC │                     │
	│ cache   │ functional-752103 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │ 13 Dec 25 18:42 UTC │
	│ cache   │ functional-752103 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │ 13 Dec 25 18:42 UTC │
	│ cache   │ functional-752103 cache add registry.k8s.io/pause:latest                                                                                          │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │ 13 Dec 25 18:42 UTC │
	│ cache   │ functional-752103 cache add minikube-local-cache-test:functional-752103                                                                           │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │ 13 Dec 25 18:42 UTC │
	│ cache   │ functional-752103 cache delete minikube-local-cache-test:functional-752103                                                                        │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │ 13 Dec 25 18:42 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │ 13 Dec 25 18:42 UTC │
	│ cache   │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │ 13 Dec 25 18:42 UTC │
	│ ssh     │ functional-752103 ssh sudo crictl images                                                                                                          │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │ 13 Dec 25 18:42 UTC │
	│ ssh     │ functional-752103 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │ 13 Dec 25 18:42 UTC │
	│ ssh     │ functional-752103 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │                     │
	│ cache   │ functional-752103 cache reload                                                                                                                    │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │ 13 Dec 25 18:42 UTC │
	│ ssh     │ functional-752103 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │ 13 Dec 25 18:42 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │ 13 Dec 25 18:42 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │ 13 Dec 25 18:42 UTC │
	│ kubectl │ functional-752103 kubectl -- --context functional-752103 get pods                                                                                 │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │                     │
	│ start   │ -p functional-752103 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                          │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 18:42:16
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 18:42:16.832380   44722 out.go:360] Setting OutFile to fd 1 ...
	I1213 18:42:16.832482   44722 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:42:16.832486   44722 out.go:374] Setting ErrFile to fd 2...
	I1213 18:42:16.832490   44722 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:42:16.832750   44722 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-2686/.minikube/bin
	I1213 18:42:16.833154   44722 out.go:368] Setting JSON to false
	I1213 18:42:16.833990   44722 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5089,"bootTime":1765646248,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 18:42:16.834047   44722 start.go:143] virtualization:  
	I1213 18:42:16.838135   44722 out.go:179] * [functional-752103] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 18:42:16.841728   44722 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 18:42:16.841798   44722 notify.go:221] Checking for updates...
	I1213 18:42:16.848230   44722 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 18:42:16.851409   44722 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-2686/kubeconfig
	I1213 18:42:16.854607   44722 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-2686/.minikube
	I1213 18:42:16.857801   44722 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 18:42:16.860996   44722 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 18:42:16.864675   44722 config.go:182] Loaded profile config "functional-752103": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 18:42:16.864787   44722 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 18:42:16.894628   44722 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 18:42:16.894745   44722 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 18:42:16.957351   44722 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-13 18:42:16.94760506 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 18:42:16.957447   44722 docker.go:319] overlay module found
	I1213 18:42:16.960782   44722 out.go:179] * Using the docker driver based on existing profile
	I1213 18:42:16.963851   44722 start.go:309] selected driver: docker
	I1213 18:42:16.963862   44722 start.go:927] validating driver "docker" against &{Name:functional-752103 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-752103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 18:42:16.963972   44722 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 18:42:16.964069   44722 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 18:42:17.021522   44722 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-13 18:42:17.012232642 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 18:42:17.021951   44722 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 18:42:17.021974   44722 cni.go:84] Creating CNI manager for ""
	I1213 18:42:17.022024   44722 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 18:42:17.022071   44722 start.go:353] cluster config:
	{Name:functional-752103 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-752103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 18:42:17.025231   44722 out.go:179] * Starting "functional-752103" primary control-plane node in "functional-752103" cluster
	I1213 18:42:17.028293   44722 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 18:42:17.031266   44722 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 18:42:17.034129   44722 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 18:42:17.034163   44722 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-2686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1213 18:42:17.034171   44722 cache.go:65] Caching tarball of preloaded images
	I1213 18:42:17.034196   44722 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 18:42:17.034259   44722 preload.go:238] Found /home/jenkins/minikube-integration/22122-2686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 18:42:17.034268   44722 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1213 18:42:17.034379   44722 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/config.json ...
	I1213 18:42:17.054759   44722 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 18:42:17.054770   44722 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 18:42:17.054784   44722 cache.go:243] Successfully downloaded all kic artifacts
	I1213 18:42:17.054813   44722 start.go:360] acquireMachinesLock for functional-752103: {Name:mkf4ec1d9e1836ef54983db4562aedfd1a9c51c3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 18:42:17.054868   44722 start.go:364] duration metric: took 38.187µs to acquireMachinesLock for "functional-752103"
	I1213 18:42:17.054886   44722 start.go:96] Skipping create...Using existing machine configuration
	I1213 18:42:17.054891   44722 fix.go:54] fixHost starting: 
	I1213 18:42:17.055151   44722 cli_runner.go:164] Run: docker container inspect functional-752103 --format={{.State.Status}}
	I1213 18:42:17.071486   44722 fix.go:112] recreateIfNeeded on functional-752103: state=Running err=<nil>
	W1213 18:42:17.071504   44722 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 18:42:17.074803   44722 out.go:252] * Updating the running docker "functional-752103" container ...
	I1213 18:42:17.074833   44722 machine.go:94] provisionDockerMachine start ...
	I1213 18:42:17.074935   44722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:42:17.093274   44722 main.go:143] libmachine: Using SSH client type: native
	I1213 18:42:17.093585   44722 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1213 18:42:17.093591   44722 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 18:42:17.244524   44722 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-752103
	
	I1213 18:42:17.244537   44722 ubuntu.go:182] provisioning hostname "functional-752103"
	I1213 18:42:17.244597   44722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:42:17.262380   44722 main.go:143] libmachine: Using SSH client type: native
	I1213 18:42:17.262682   44722 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1213 18:42:17.262690   44722 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-752103 && echo "functional-752103" | sudo tee /etc/hostname
	I1213 18:42:17.422688   44722 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-752103
	
	I1213 18:42:17.422759   44722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:42:17.440827   44722 main.go:143] libmachine: Using SSH client type: native
	I1213 18:42:17.441150   44722 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1213 18:42:17.441163   44722 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-752103' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-752103/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-752103' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 18:42:17.593792   44722 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 18:42:17.593821   44722 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-2686/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-2686/.minikube}
	I1213 18:42:17.593841   44722 ubuntu.go:190] setting up certificates
	I1213 18:42:17.593861   44722 provision.go:84] configureAuth start
	I1213 18:42:17.593949   44722 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-752103
	I1213 18:42:17.612231   44722 provision.go:143] copyHostCerts
	I1213 18:42:17.612297   44722 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem, removing ...
	I1213 18:42:17.612304   44722 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem
	I1213 18:42:17.612382   44722 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem (1675 bytes)
	I1213 18:42:17.612525   44722 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem, removing ...
	I1213 18:42:17.612528   44722 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem
	I1213 18:42:17.612554   44722 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem (1082 bytes)
	I1213 18:42:17.612619   44722 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem, removing ...
	I1213 18:42:17.612622   44722 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem
	I1213 18:42:17.612646   44722 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem (1123 bytes)
	I1213 18:42:17.612700   44722 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem org=jenkins.functional-752103 san=[127.0.0.1 192.168.49.2 functional-752103 localhost minikube]
	I1213 18:42:17.675451   44722 provision.go:177] copyRemoteCerts
	I1213 18:42:17.675509   44722 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 18:42:17.675551   44722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:42:17.693626   44722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa Username:docker}
	I1213 18:42:17.798419   44722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 18:42:17.816185   44722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 18:42:17.833700   44722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 18:42:17.853857   44722 provision.go:87] duration metric: took 259.975405ms to configureAuth
	I1213 18:42:17.853904   44722 ubuntu.go:206] setting minikube options for container-runtime
	I1213 18:42:17.854123   44722 config.go:182] Loaded profile config "functional-752103": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 18:42:17.854230   44722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:42:17.879965   44722 main.go:143] libmachine: Using SSH client type: native
	I1213 18:42:17.880277   44722 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1213 18:42:17.880288   44722 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 18:42:18.248633   44722 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 18:42:18.248647   44722 machine.go:97] duration metric: took 1.173808025s to provisionDockerMachine
	I1213 18:42:18.248658   44722 start.go:293] postStartSetup for "functional-752103" (driver="docker")
	I1213 18:42:18.248669   44722 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 18:42:18.248743   44722 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 18:42:18.248792   44722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:42:18.266147   44722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa Username:docker}
	I1213 18:42:18.373221   44722 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 18:42:18.376713   44722 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 18:42:18.376729   44722 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 18:42:18.376740   44722 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-2686/.minikube/addons for local assets ...
	I1213 18:42:18.376791   44722 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-2686/.minikube/files for local assets ...
	I1213 18:42:18.376867   44722 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem -> 46372.pem in /etc/ssl/certs
	I1213 18:42:18.376940   44722 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/test/nested/copy/4637/hosts -> hosts in /etc/test/nested/copy/4637
	I1213 18:42:18.376981   44722 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4637
	I1213 18:42:18.384622   44722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem --> /etc/ssl/certs/46372.pem (1708 bytes)
	I1213 18:42:18.402512   44722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/test/nested/copy/4637/hosts --> /etc/test/nested/copy/4637/hosts (40 bytes)
	I1213 18:42:18.419539   44722 start.go:296] duration metric: took 170.867557ms for postStartSetup
	I1213 18:42:18.419610   44722 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 18:42:18.419664   44722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:42:18.436637   44722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa Username:docker}
	I1213 18:42:18.538189   44722 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 18:42:18.542827   44722 fix.go:56] duration metric: took 1.487930222s for fixHost
	I1213 18:42:18.542846   44722 start.go:83] releasing machines lock for "functional-752103", held for 1.487968187s
	I1213 18:42:18.542915   44722 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-752103
	I1213 18:42:18.560389   44722 ssh_runner.go:195] Run: cat /version.json
	I1213 18:42:18.560434   44722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:42:18.560692   44722 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 18:42:18.560748   44722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:42:18.583551   44722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa Username:docker}
	I1213 18:42:18.591018   44722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa Username:docker}
	I1213 18:42:18.701640   44722 ssh_runner.go:195] Run: systemctl --version
	I1213 18:42:18.800116   44722 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 18:42:18.836359   44722 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 18:42:18.840572   44722 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 18:42:18.840646   44722 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 18:42:18.848286   44722 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 18:42:18.848299   44722 start.go:496] detecting cgroup driver to use...
	I1213 18:42:18.848329   44722 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 18:42:18.848379   44722 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 18:42:18.864054   44722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 18:42:18.878242   44722 docker.go:218] disabling cri-docker service (if available) ...
	I1213 18:42:18.878341   44722 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 18:42:18.895499   44722 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 18:42:18.910156   44722 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 18:42:19.020039   44722 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 18:42:19.142208   44722 docker.go:234] disabling docker service ...
	I1213 18:42:19.142263   44722 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 18:42:19.158384   44722 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 18:42:19.171631   44722 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 18:42:19.293369   44722 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 18:42:19.422037   44722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 18:42:19.435333   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 18:42:19.449327   44722 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 18:42:19.449380   44722 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:42:19.458689   44722 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 18:42:19.458748   44722 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:42:19.467502   44722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:42:19.476408   44722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:42:19.485815   44722 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 18:42:19.494237   44722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:42:19.503335   44722 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:42:19.511920   44722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:42:19.520510   44722 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 18:42:19.528006   44722 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 18:42:19.535403   44722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 18:42:19.669317   44722 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 18:42:19.868011   44722 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 18:42:19.868104   44722 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 18:42:19.871850   44722 start.go:564] Will wait 60s for crictl version
	I1213 18:42:19.871906   44722 ssh_runner.go:195] Run: which crictl
	I1213 18:42:19.875387   44722 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 18:42:19.901618   44722 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 18:42:19.901703   44722 ssh_runner.go:195] Run: crio --version
	I1213 18:42:19.929436   44722 ssh_runner.go:195] Run: crio --version
	I1213 18:42:19.965392   44722 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1213 18:42:19.968348   44722 cli_runner.go:164] Run: docker network inspect functional-752103 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 18:42:19.986389   44722 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 18:42:19.993243   44722 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1213 18:42:19.996095   44722 kubeadm.go:884] updating cluster {Name:functional-752103 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-752103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 18:42:19.996213   44722 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 18:42:19.996291   44722 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 18:42:20.057560   44722 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 18:42:20.057583   44722 crio.go:433] Images already preloaded, skipping extraction
	I1213 18:42:20.057640   44722 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 18:42:20.089218   44722 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 18:42:20.089230   44722 cache_images.go:86] Images are preloaded, skipping loading
	I1213 18:42:20.089236   44722 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1213 18:42:20.089328   44722 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-752103 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-752103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 18:42:20.089414   44722 ssh_runner.go:195] Run: crio config
	I1213 18:42:20.177167   44722 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1213 18:42:20.177187   44722 cni.go:84] Creating CNI manager for ""
	I1213 18:42:20.177196   44722 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 18:42:20.177232   44722 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 18:42:20.177254   44722 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-752103 NodeName:functional-752103 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 18:42:20.177418   44722 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-752103"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 18:42:20.177484   44722 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 18:42:20.185578   44722 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 18:42:20.185638   44722 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 18:42:20.192929   44722 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1213 18:42:20.205146   44722 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 18:42:20.217154   44722 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I1213 18:42:20.229717   44722 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 18:42:20.233247   44722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 18:42:20.353829   44722 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 18:42:20.830403   44722 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103 for IP: 192.168.49.2
	I1213 18:42:20.830413   44722 certs.go:195] generating shared ca certs ...
	I1213 18:42:20.830433   44722 certs.go:227] acquiring lock for ca certs: {Name:mkf9b87b9a1a82bdfcae8a54365751154f6d858f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 18:42:20.830617   44722 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key
	I1213 18:42:20.830683   44722 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key
	I1213 18:42:20.830690   44722 certs.go:257] generating profile certs ...
	I1213 18:42:20.830812   44722 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/client.key
	I1213 18:42:20.830890   44722 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/apiserver.key.597c6026
	I1213 18:42:20.830949   44722 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/proxy-client.key
	I1213 18:42:20.831080   44722 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637.pem (1338 bytes)
	W1213 18:42:20.831115   44722 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637_empty.pem, impossibly tiny 0 bytes
	I1213 18:42:20.831122   44722 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 18:42:20.831151   44722 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem (1082 bytes)
	I1213 18:42:20.831178   44722 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem (1123 bytes)
	I1213 18:42:20.831204   44722 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem (1675 bytes)
	I1213 18:42:20.831248   44722 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem (1708 bytes)
	I1213 18:42:20.831981   44722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 18:42:20.856838   44722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 18:42:20.879274   44722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 18:42:20.903042   44722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 18:42:20.923306   44722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 18:42:20.942121   44722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 18:42:20.960173   44722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 18:42:20.977612   44722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 18:42:20.994747   44722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 18:42:21.015274   44722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637.pem --> /usr/share/ca-certificates/4637.pem (1338 bytes)
	I1213 18:42:21.032852   44722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem --> /usr/share/ca-certificates/46372.pem (1708 bytes)
	I1213 18:42:21.049826   44722 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 18:42:21.062502   44722 ssh_runner.go:195] Run: openssl version
	I1213 18:42:21.068589   44722 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 18:42:21.075691   44722 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 18:42:21.083152   44722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 18:42:21.086777   44722 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I1213 18:42:21.086838   44722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 18:42:21.127646   44722 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 18:42:21.135282   44722 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4637.pem
	I1213 18:42:21.142547   44722 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4637.pem /etc/ssl/certs/4637.pem
	I1213 18:42:21.150436   44722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4637.pem
	I1213 18:42:21.154171   44722 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 18:27 /usr/share/ca-certificates/4637.pem
	I1213 18:42:21.154226   44722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4637.pem
	I1213 18:42:21.195398   44722 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 18:42:21.202918   44722 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/46372.pem
	I1213 18:42:21.210392   44722 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/46372.pem /etc/ssl/certs/46372.pem
	I1213 18:42:21.218018   44722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/46372.pem
	I1213 18:42:21.221839   44722 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 18:27 /usr/share/ca-certificates/46372.pem
	I1213 18:42:21.221907   44722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/46372.pem
	I1213 18:42:21.262578   44722 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 18:42:21.269897   44722 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 18:42:21.273658   44722 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 18:42:21.314538   44722 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 18:42:21.355677   44722 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 18:42:21.398275   44722 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 18:42:21.439207   44722 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 18:42:21.480256   44722 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 18:42:21.526473   44722 kubeadm.go:401] StartCluster: {Name:functional-752103 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-752103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 18:42:21.526551   44722 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 18:42:21.526617   44722 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 18:42:21.557940   44722 cri.go:89] found id: ""
	I1213 18:42:21.558001   44722 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 18:42:21.566021   44722 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 18:42:21.566031   44722 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 18:42:21.566081   44722 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 18:42:21.573603   44722 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 18:42:21.574106   44722 kubeconfig.go:125] found "functional-752103" server: "https://192.168.49.2:8441"
	I1213 18:42:21.575413   44722 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 18:42:21.585702   44722 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-13 18:27:45.810242505 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-13 18:42:20.222041116 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1213 18:42:21.585713   44722 kubeadm.go:1161] stopping kube-system containers ...
	I1213 18:42:21.585724   44722 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1213 18:42:21.585780   44722 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 18:42:21.617768   44722 cri.go:89] found id: ""
	I1213 18:42:21.617827   44722 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 18:42:21.635403   44722 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 18:42:21.643636   44722 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 13 18:31 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Dec 13 18:31 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 13 18:31 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec 13 18:31 /etc/kubernetes/scheduler.conf
	
	I1213 18:42:21.643708   44722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 18:42:21.651764   44722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 18:42:21.659161   44722 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 18:42:21.659213   44722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 18:42:21.666555   44722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 18:42:21.674192   44722 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 18:42:21.674247   44722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 18:42:21.681652   44722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 18:42:21.689753   44722 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 18:42:21.689823   44722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 18:42:21.697372   44722 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 18:42:21.705090   44722 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 18:42:21.753330   44722 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 18:42:23.314116   44722 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.560761972s)
	I1213 18:42:23.314176   44722 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 18:42:23.523724   44722 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 18:42:23.594421   44722 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1213 18:42:23.642920   44722 api_server.go:52] waiting for apiserver process to appear ...
	I1213 18:42:23.642986   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:24.143977   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:24.643428   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:25.143550   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:25.643771   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:26.143193   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:26.643175   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:27.143974   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:27.643187   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:28.143912   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:28.643171   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:29.144072   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:29.644225   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:30.144075   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:30.643706   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:31.143172   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:31.643056   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:32.143628   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:32.643125   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:33.143827   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:33.643131   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:34.143247   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:34.643324   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:35.143141   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:35.643248   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:36.143915   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:36.644040   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:37.143715   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:37.643270   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:38.143997   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:38.643143   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:39.144023   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:39.643975   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:40.143050   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:40.643089   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:41.143722   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:41.643477   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:42.143838   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:42.643431   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:43.143175   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:43.643406   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:44.143895   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:44.643143   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:45.144217   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:45.644055   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:46.143137   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:46.644107   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:47.143996   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:47.643160   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:48.143815   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:48.643858   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:49.143166   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:49.644081   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:50.143765   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:50.643065   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:51.143582   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:51.643619   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:52.143220   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:52.643909   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:53.143832   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:53.643709   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:54.143426   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:54.643284   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:55.143992   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:55.643406   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:56.143943   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:56.643844   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:57.143618   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:57.643188   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:58.143857   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:58.643381   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:59.143183   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:59.643139   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:00.143730   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:00.643184   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:01.143789   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:01.643677   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:02.143883   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:02.643235   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:03.143175   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:03.643112   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:04.143893   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:04.643955   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:05.144057   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:05.643239   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:06.143229   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:06.643162   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:07.143132   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:07.643342   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:08.143161   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:08.643365   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:09.144023   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:09.643759   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:10.143925   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:10.644116   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:11.143184   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:11.643163   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:12.144081   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:12.643761   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:13.143171   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:13.643174   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:14.143070   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:14.643090   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:15.143762   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:15.643166   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:16.143069   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:16.644103   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:17.143993   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:17.643934   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:18.143216   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:18.643988   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:19.143982   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:19.643766   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:20.143191   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:20.644118   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:21.143094   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:21.644013   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:22.143973   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:22.643967   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:23.143991   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:23.643861   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:43:23.643960   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:43:23.674160   44722 cri.go:89] found id: ""
	I1213 18:43:23.674175   44722 logs.go:282] 0 containers: []
	W1213 18:43:23.674182   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:43:23.674187   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:43:23.674245   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:43:23.700540   44722 cri.go:89] found id: ""
	I1213 18:43:23.700554   44722 logs.go:282] 0 containers: []
	W1213 18:43:23.700561   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:43:23.700566   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:43:23.700624   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:43:23.726064   44722 cri.go:89] found id: ""
	I1213 18:43:23.726078   44722 logs.go:282] 0 containers: []
	W1213 18:43:23.726084   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:43:23.726089   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:43:23.726148   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:43:23.752099   44722 cri.go:89] found id: ""
	I1213 18:43:23.752113   44722 logs.go:282] 0 containers: []
	W1213 18:43:23.752120   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:43:23.752125   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:43:23.752190   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:43:23.778105   44722 cri.go:89] found id: ""
	I1213 18:43:23.778120   44722 logs.go:282] 0 containers: []
	W1213 18:43:23.778126   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:43:23.778131   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:43:23.778193   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:43:23.806032   44722 cri.go:89] found id: ""
	I1213 18:43:23.806047   44722 logs.go:282] 0 containers: []
	W1213 18:43:23.806054   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:43:23.806059   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:43:23.806117   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:43:23.832635   44722 cri.go:89] found id: ""
	I1213 18:43:23.832649   44722 logs.go:282] 0 containers: []
	W1213 18:43:23.832658   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:43:23.832667   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:43:23.832679   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:43:23.899244   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:43:23.899262   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:43:23.910777   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:43:23.910793   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:43:23.979546   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:43:23.970843   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:23.971479   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:23.973158   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:23.973794   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:23.975445   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:43:23.970843   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:23.971479   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:23.973158   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:23.973794   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:23.975445   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:43:23.979557   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:43:23.979567   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:43:24.055422   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:43:24.055441   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:43:26.587216   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:26.602744   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:43:26.602803   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:43:26.637528   44722 cri.go:89] found id: ""
	I1213 18:43:26.637543   44722 logs.go:282] 0 containers: []
	W1213 18:43:26.637550   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:43:26.637555   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:43:26.637627   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:43:26.668738   44722 cri.go:89] found id: ""
	I1213 18:43:26.668752   44722 logs.go:282] 0 containers: []
	W1213 18:43:26.668759   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:43:26.668764   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:43:26.668820   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:43:26.694813   44722 cri.go:89] found id: ""
	I1213 18:43:26.694827   44722 logs.go:282] 0 containers: []
	W1213 18:43:26.694834   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:43:26.694839   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:43:26.694903   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:43:26.724152   44722 cri.go:89] found id: ""
	I1213 18:43:26.724165   44722 logs.go:282] 0 containers: []
	W1213 18:43:26.724172   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:43:26.724177   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:43:26.724234   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:43:26.753666   44722 cri.go:89] found id: ""
	I1213 18:43:26.753680   44722 logs.go:282] 0 containers: []
	W1213 18:43:26.753687   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:43:26.753692   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:43:26.753751   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:43:26.778797   44722 cri.go:89] found id: ""
	I1213 18:43:26.778810   44722 logs.go:282] 0 containers: []
	W1213 18:43:26.778817   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:43:26.778822   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:43:26.778878   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:43:26.804095   44722 cri.go:89] found id: ""
	I1213 18:43:26.804108   44722 logs.go:282] 0 containers: []
	W1213 18:43:26.804121   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:43:26.804128   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:43:26.804139   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:43:26.872610   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:43:26.863726   11132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:26.864249   11132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:26.865989   11132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:26.866485   11132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:26.868188   11132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:43:26.863726   11132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:26.864249   11132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:26.865989   11132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:26.866485   11132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:26.868188   11132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:43:26.872619   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:43:26.872629   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:43:26.941929   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:43:26.941948   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:43:26.969504   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:43:26.969520   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:43:27.036106   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:43:27.036126   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:43:29.549238   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:29.561563   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:43:29.561629   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:43:29.595212   44722 cri.go:89] found id: ""
	I1213 18:43:29.595227   44722 logs.go:282] 0 containers: []
	W1213 18:43:29.595234   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:43:29.595239   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:43:29.595298   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:43:29.632368   44722 cri.go:89] found id: ""
	I1213 18:43:29.632382   44722 logs.go:282] 0 containers: []
	W1213 18:43:29.632388   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:43:29.632393   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:43:29.632450   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:43:29.661185   44722 cri.go:89] found id: ""
	I1213 18:43:29.661199   44722 logs.go:282] 0 containers: []
	W1213 18:43:29.661206   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:43:29.661211   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:43:29.661271   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:43:29.686961   44722 cri.go:89] found id: ""
	I1213 18:43:29.686974   44722 logs.go:282] 0 containers: []
	W1213 18:43:29.686981   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:43:29.686986   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:43:29.687049   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:43:29.713104   44722 cri.go:89] found id: ""
	I1213 18:43:29.713118   44722 logs.go:282] 0 containers: []
	W1213 18:43:29.713125   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:43:29.713130   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:43:29.713190   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:43:29.738029   44722 cri.go:89] found id: ""
	I1213 18:43:29.738042   44722 logs.go:282] 0 containers: []
	W1213 18:43:29.738049   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:43:29.738054   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:43:29.738116   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:43:29.763765   44722 cri.go:89] found id: ""
	I1213 18:43:29.763779   44722 logs.go:282] 0 containers: []
	W1213 18:43:29.763785   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:43:29.763793   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:43:29.763803   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:43:29.829845   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:43:29.829864   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:43:29.841137   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:43:29.841153   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:43:29.910214   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:43:29.900921   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:29.902099   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:29.903031   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:29.903808   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:29.904683   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:43:29.900921   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:29.902099   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:29.903031   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:29.903808   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:29.904683   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:43:29.910238   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:43:29.910251   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:43:29.979995   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:43:29.980012   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:43:32.559824   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:32.569836   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:43:32.569896   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:43:32.598661   44722 cri.go:89] found id: ""
	I1213 18:43:32.598675   44722 logs.go:282] 0 containers: []
	W1213 18:43:32.598682   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:43:32.598687   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:43:32.598741   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:43:32.629547   44722 cri.go:89] found id: ""
	I1213 18:43:32.629562   44722 logs.go:282] 0 containers: []
	W1213 18:43:32.629568   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:43:32.629573   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:43:32.629650   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:43:32.654825   44722 cri.go:89] found id: ""
	I1213 18:43:32.654839   44722 logs.go:282] 0 containers: []
	W1213 18:43:32.654846   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:43:32.654851   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:43:32.654908   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:43:32.680611   44722 cri.go:89] found id: ""
	I1213 18:43:32.680625   44722 logs.go:282] 0 containers: []
	W1213 18:43:32.680632   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:43:32.680637   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:43:32.680695   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:43:32.706618   44722 cri.go:89] found id: ""
	I1213 18:43:32.706632   44722 logs.go:282] 0 containers: []
	W1213 18:43:32.706639   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:43:32.706643   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:43:32.706702   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:43:32.730958   44722 cri.go:89] found id: ""
	I1213 18:43:32.730971   44722 logs.go:282] 0 containers: []
	W1213 18:43:32.730978   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:43:32.730983   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:43:32.731052   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:43:32.759159   44722 cri.go:89] found id: ""
	I1213 18:43:32.759172   44722 logs.go:282] 0 containers: []
	W1213 18:43:32.759179   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:43:32.759186   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:43:32.759196   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:43:32.824778   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:43:32.824797   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:43:32.835474   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:43:32.835491   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:43:32.898129   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:43:32.889603   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:32.890366   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:32.891862   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:32.892440   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:32.893974   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:43:32.889603   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:32.890366   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:32.891862   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:32.892440   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:32.893974   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:43:32.898149   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:43:32.898160   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:43:32.970010   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:43:32.970027   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:43:35.499162   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:35.510104   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:43:35.510168   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:43:35.536034   44722 cri.go:89] found id: ""
	I1213 18:43:35.536054   44722 logs.go:282] 0 containers: []
	W1213 18:43:35.536061   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:43:35.536066   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:43:35.536125   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:43:35.560363   44722 cri.go:89] found id: ""
	I1213 18:43:35.560377   44722 logs.go:282] 0 containers: []
	W1213 18:43:35.560384   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:43:35.560389   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:43:35.560447   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:43:35.595466   44722 cri.go:89] found id: ""
	I1213 18:43:35.595480   44722 logs.go:282] 0 containers: []
	W1213 18:43:35.595486   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:43:35.595491   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:43:35.595546   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:43:35.626296   44722 cri.go:89] found id: ""
	I1213 18:43:35.626310   44722 logs.go:282] 0 containers: []
	W1213 18:43:35.626316   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:43:35.626321   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:43:35.626376   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:43:35.653200   44722 cri.go:89] found id: ""
	I1213 18:43:35.653214   44722 logs.go:282] 0 containers: []
	W1213 18:43:35.653221   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:43:35.653225   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:43:35.653322   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:43:35.678439   44722 cri.go:89] found id: ""
	I1213 18:43:35.678453   44722 logs.go:282] 0 containers: []
	W1213 18:43:35.678459   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:43:35.678464   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:43:35.678525   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:43:35.703934   44722 cri.go:89] found id: ""
	I1213 18:43:35.703948   44722 logs.go:282] 0 containers: []
	W1213 18:43:35.703954   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:43:35.703962   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:43:35.703972   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:43:35.769879   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:43:35.769897   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:43:35.781228   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:43:35.781245   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:43:35.848304   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:43:35.840026   11450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:35.840682   11450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:35.842398   11450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:35.842978   11450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:35.844548   11450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:43:35.840026   11450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:35.840682   11450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:35.842398   11450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:35.842978   11450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:35.844548   11450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:43:35.848316   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:43:35.848327   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:43:35.917611   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:43:35.917630   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:43:38.449407   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:38.459447   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:43:38.459504   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:43:38.485144   44722 cri.go:89] found id: ""
	I1213 18:43:38.485156   44722 logs.go:282] 0 containers: []
	W1213 18:43:38.485163   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:43:38.485179   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:43:38.485241   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:43:38.513966   44722 cri.go:89] found id: ""
	I1213 18:43:38.513980   44722 logs.go:282] 0 containers: []
	W1213 18:43:38.513987   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:43:38.513992   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:43:38.514050   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:43:38.540044   44722 cri.go:89] found id: ""
	I1213 18:43:38.540058   44722 logs.go:282] 0 containers: []
	W1213 18:43:38.540065   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:43:38.540070   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:43:38.540128   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:43:38.570046   44722 cri.go:89] found id: ""
	I1213 18:43:38.570060   44722 logs.go:282] 0 containers: []
	W1213 18:43:38.570067   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:43:38.570072   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:43:38.570131   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:43:38.602431   44722 cri.go:89] found id: ""
	I1213 18:43:38.602444   44722 logs.go:282] 0 containers: []
	W1213 18:43:38.602451   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:43:38.602456   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:43:38.602513   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:43:38.631212   44722 cri.go:89] found id: ""
	I1213 18:43:38.631226   44722 logs.go:282] 0 containers: []
	W1213 18:43:38.631233   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:43:38.631238   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:43:38.631295   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:43:38.658361   44722 cri.go:89] found id: ""
	I1213 18:43:38.658375   44722 logs.go:282] 0 containers: []
	W1213 18:43:38.658383   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:43:38.658391   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:43:38.658401   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:43:38.728418   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:43:38.728436   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:43:38.739710   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:43:38.739726   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:43:38.807705   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:43:38.799135   11557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:38.799833   11557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:38.801634   11557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:38.802286   11557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:38.803965   11557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:43:38.799135   11557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:38.799833   11557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:38.801634   11557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:38.802286   11557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:38.803965   11557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:43:38.807715   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:43:38.807726   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:43:38.876773   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:43:38.876792   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:43:41.406031   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:41.416061   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:43:41.416122   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:43:41.441164   44722 cri.go:89] found id: ""
	I1213 18:43:41.441178   44722 logs.go:282] 0 containers: []
	W1213 18:43:41.441184   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:43:41.441189   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:43:41.441246   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:43:41.468283   44722 cri.go:89] found id: ""
	I1213 18:43:41.468296   44722 logs.go:282] 0 containers: []
	W1213 18:43:41.468303   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:43:41.468313   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:43:41.468369   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:43:41.492435   44722 cri.go:89] found id: ""
	I1213 18:43:41.492449   44722 logs.go:282] 0 containers: []
	W1213 18:43:41.492456   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:43:41.492461   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:43:41.492525   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:43:41.517861   44722 cri.go:89] found id: ""
	I1213 18:43:41.517874   44722 logs.go:282] 0 containers: []
	W1213 18:43:41.517881   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:43:41.517886   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:43:41.517946   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:43:41.542334   44722 cri.go:89] found id: ""
	I1213 18:43:41.542348   44722 logs.go:282] 0 containers: []
	W1213 18:43:41.542354   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:43:41.542359   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:43:41.542420   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:43:41.566791   44722 cri.go:89] found id: ""
	I1213 18:43:41.566805   44722 logs.go:282] 0 containers: []
	W1213 18:43:41.566812   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:43:41.566817   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:43:41.566873   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:43:41.605333   44722 cri.go:89] found id: ""
	I1213 18:43:41.605347   44722 logs.go:282] 0 containers: []
	W1213 18:43:41.605353   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:43:41.605361   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:43:41.605372   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:43:41.685285   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:43:41.685307   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:43:41.719016   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:43:41.719031   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:43:41.784620   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:43:41.784638   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:43:41.797084   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:43:41.797099   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:43:41.863425   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:43:41.855920   11672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:41.856329   11672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:41.857901   11672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:41.858215   11672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:41.859646   11672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:43:41.855920   11672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:41.856329   11672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:41.857901   11672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:41.858215   11672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:41.859646   11672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:43:44.365147   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:44.375234   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:43:44.375292   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:43:44.404071   44722 cri.go:89] found id: ""
	I1213 18:43:44.404084   44722 logs.go:282] 0 containers: []
	W1213 18:43:44.404091   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:43:44.404100   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:43:44.404159   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:43:44.429141   44722 cri.go:89] found id: ""
	I1213 18:43:44.429154   44722 logs.go:282] 0 containers: []
	W1213 18:43:44.429161   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:43:44.429166   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:43:44.429235   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:43:44.453307   44722 cri.go:89] found id: ""
	I1213 18:43:44.453321   44722 logs.go:282] 0 containers: []
	W1213 18:43:44.453328   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:43:44.453332   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:43:44.453409   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:43:44.478549   44722 cri.go:89] found id: ""
	I1213 18:43:44.478563   44722 logs.go:282] 0 containers: []
	W1213 18:43:44.478570   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:43:44.478576   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:43:44.478636   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:43:44.504258   44722 cri.go:89] found id: ""
	I1213 18:43:44.504272   44722 logs.go:282] 0 containers: []
	W1213 18:43:44.504278   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:43:44.504283   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:43:44.504340   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:43:44.528573   44722 cri.go:89] found id: ""
	I1213 18:43:44.528587   44722 logs.go:282] 0 containers: []
	W1213 18:43:44.528594   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:43:44.528599   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:43:44.528655   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:43:44.553529   44722 cri.go:89] found id: ""
	I1213 18:43:44.553555   44722 logs.go:282] 0 containers: []
	W1213 18:43:44.553562   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:43:44.553570   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:43:44.553581   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:43:44.591322   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:43:44.591339   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:43:44.676235   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:43:44.676264   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:43:44.687308   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:43:44.687333   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:43:44.749534   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:43:44.740808   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:44.741545   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:44.743186   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:44.743511   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:44.745093   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:43:44.740808   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:44.741545   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:44.743186   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:44.743511   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:44.745093   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:43:44.749567   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:43:44.749577   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:43:47.317951   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:47.328222   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:43:47.328296   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:43:47.357484   44722 cri.go:89] found id: ""
	I1213 18:43:47.357498   44722 logs.go:282] 0 containers: []
	W1213 18:43:47.357515   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:43:47.357521   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:43:47.357593   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:43:47.388340   44722 cri.go:89] found id: ""
	I1213 18:43:47.388354   44722 logs.go:282] 0 containers: []
	W1213 18:43:47.388362   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:43:47.388367   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:43:47.388431   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:43:47.412714   44722 cri.go:89] found id: ""
	I1213 18:43:47.412726   44722 logs.go:282] 0 containers: []
	W1213 18:43:47.412733   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:43:47.412738   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:43:47.412794   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:43:47.437349   44722 cri.go:89] found id: ""
	I1213 18:43:47.437363   44722 logs.go:282] 0 containers: []
	W1213 18:43:47.437369   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:43:47.437374   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:43:47.437432   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:43:47.461369   44722 cri.go:89] found id: ""
	I1213 18:43:47.461383   44722 logs.go:282] 0 containers: []
	W1213 18:43:47.461390   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:43:47.461395   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:43:47.461454   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:43:47.494140   44722 cri.go:89] found id: ""
	I1213 18:43:47.494154   44722 logs.go:282] 0 containers: []
	W1213 18:43:47.494161   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:43:47.494166   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:43:47.494223   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:43:47.519020   44722 cri.go:89] found id: ""
	I1213 18:43:47.519033   44722 logs.go:282] 0 containers: []
	W1213 18:43:47.519040   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:43:47.519047   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:43:47.519060   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:43:47.587741   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:43:47.587760   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:43:47.623942   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:43:47.623957   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:43:47.696440   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:43:47.696459   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:43:47.707187   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:43:47.707203   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:43:47.769911   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:43:47.762074   11880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:47.762544   11880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:47.764216   11880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:47.764680   11880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:47.766131   11880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:43:47.762074   11880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:47.762544   11880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:47.764216   11880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:47.764680   11880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:47.766131   11880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:43:50.270188   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:50.280132   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:43:50.280190   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:43:50.308672   44722 cri.go:89] found id: ""
	I1213 18:43:50.308686   44722 logs.go:282] 0 containers: []
	W1213 18:43:50.308693   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:43:50.308699   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:43:50.308758   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:43:50.335996   44722 cri.go:89] found id: ""
	I1213 18:43:50.336010   44722 logs.go:282] 0 containers: []
	W1213 18:43:50.336016   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:43:50.336021   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:43:50.336080   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:43:50.361733   44722 cri.go:89] found id: ""
	I1213 18:43:50.361746   44722 logs.go:282] 0 containers: []
	W1213 18:43:50.361753   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:43:50.361758   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:43:50.361816   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:43:50.387122   44722 cri.go:89] found id: ""
	I1213 18:43:50.387137   44722 logs.go:282] 0 containers: []
	W1213 18:43:50.387143   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:43:50.387148   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:43:50.387204   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:43:50.411746   44722 cri.go:89] found id: ""
	I1213 18:43:50.411760   44722 logs.go:282] 0 containers: []
	W1213 18:43:50.411766   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:43:50.411771   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:43:50.411828   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:43:50.439079   44722 cri.go:89] found id: ""
	I1213 18:43:50.439093   44722 logs.go:282] 0 containers: []
	W1213 18:43:50.439100   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:43:50.439104   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:43:50.439158   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:43:50.464264   44722 cri.go:89] found id: ""
	I1213 18:43:50.464278   44722 logs.go:282] 0 containers: []
	W1213 18:43:50.464285   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:43:50.464293   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:43:50.464303   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:43:50.530938   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:43:50.530956   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:43:50.541880   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:43:50.541897   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:43:50.622277   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:43:50.613287   11970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:50.613702   11970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:50.615208   11970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:50.615836   11970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:50.616931   11970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:43:50.613287   11970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:50.613702   11970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:50.615208   11970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:50.615836   11970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:50.616931   11970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:43:50.622299   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:43:50.622311   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:43:50.693744   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:43:50.693765   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:43:53.224830   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:53.235168   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:43:53.235224   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:43:53.261284   44722 cri.go:89] found id: ""
	I1213 18:43:53.261297   44722 logs.go:282] 0 containers: []
	W1213 18:43:53.261304   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:43:53.261309   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:43:53.261369   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:43:53.287104   44722 cri.go:89] found id: ""
	I1213 18:43:53.287118   44722 logs.go:282] 0 containers: []
	W1213 18:43:53.287125   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:43:53.287136   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:43:53.287197   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:43:53.312612   44722 cri.go:89] found id: ""
	I1213 18:43:53.312626   44722 logs.go:282] 0 containers: []
	W1213 18:43:53.312636   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:43:53.312641   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:43:53.312700   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:43:53.338548   44722 cri.go:89] found id: ""
	I1213 18:43:53.338562   44722 logs.go:282] 0 containers: []
	W1213 18:43:53.338570   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:43:53.338575   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:43:53.338634   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:43:53.363849   44722 cri.go:89] found id: ""
	I1213 18:43:53.363862   44722 logs.go:282] 0 containers: []
	W1213 18:43:53.363869   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:43:53.363874   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:43:53.363933   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:43:53.388677   44722 cri.go:89] found id: ""
	I1213 18:43:53.388693   44722 logs.go:282] 0 containers: []
	W1213 18:43:53.388700   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:43:53.388707   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:43:53.388764   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:43:53.413384   44722 cri.go:89] found id: ""
	I1213 18:43:53.413398   44722 logs.go:282] 0 containers: []
	W1213 18:43:53.413405   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:43:53.413412   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:43:53.413426   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:43:53.480895   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:43:53.480915   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:43:53.510174   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:43:53.510191   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:43:53.579252   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:43:53.579272   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:43:53.594356   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:43:53.594373   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:43:53.674807   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:43:53.667137   12096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:53.667568   12096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:53.669097   12096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:53.669497   12096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:53.670996   12096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:43:53.667137   12096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:53.667568   12096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:53.669097   12096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:53.669497   12096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:53.670996   12096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:43:56.175034   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:56.185031   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:43:56.185091   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:43:56.210252   44722 cri.go:89] found id: ""
	I1213 18:43:56.210266   44722 logs.go:282] 0 containers: []
	W1213 18:43:56.210273   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:43:56.210289   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:43:56.210345   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:43:56.238190   44722 cri.go:89] found id: ""
	I1213 18:43:56.238204   44722 logs.go:282] 0 containers: []
	W1213 18:43:56.238211   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:43:56.238216   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:43:56.238280   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:43:56.262334   44722 cri.go:89] found id: ""
	I1213 18:43:56.262361   44722 logs.go:282] 0 containers: []
	W1213 18:43:56.262368   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:43:56.262374   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:43:56.262439   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:43:56.286668   44722 cri.go:89] found id: ""
	I1213 18:43:56.286681   44722 logs.go:282] 0 containers: []
	W1213 18:43:56.286688   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:43:56.286693   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:43:56.286753   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:43:56.312401   44722 cri.go:89] found id: ""
	I1213 18:43:56.312426   44722 logs.go:282] 0 containers: []
	W1213 18:43:56.312434   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:43:56.312439   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:43:56.312514   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:43:56.337419   44722 cri.go:89] found id: ""
	I1213 18:43:56.337433   44722 logs.go:282] 0 containers: []
	W1213 18:43:56.337440   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:43:56.337446   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:43:56.337512   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:43:56.363240   44722 cri.go:89] found id: ""
	I1213 18:43:56.363252   44722 logs.go:282] 0 containers: []
	W1213 18:43:56.363259   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:43:56.363274   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:43:56.363285   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:43:56.427558   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:43:56.427576   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:43:56.438948   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:43:56.438963   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:43:56.504100   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:43:56.496063   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:56.496558   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:56.498109   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:56.498537   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:56.500111   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:43:56.496063   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:56.496558   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:56.498109   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:56.498537   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:56.500111   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:43:56.504110   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:43:56.504121   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:43:56.576300   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:43:56.576319   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:43:59.120724   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:59.131483   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:43:59.131541   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:43:59.161664   44722 cri.go:89] found id: ""
	I1213 18:43:59.161677   44722 logs.go:282] 0 containers: []
	W1213 18:43:59.161684   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:43:59.161689   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:43:59.161747   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:43:59.186541   44722 cri.go:89] found id: ""
	I1213 18:43:59.186554   44722 logs.go:282] 0 containers: []
	W1213 18:43:59.186561   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:43:59.186566   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:43:59.186631   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:43:59.214613   44722 cri.go:89] found id: ""
	I1213 18:43:59.214627   44722 logs.go:282] 0 containers: []
	W1213 18:43:59.214634   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:43:59.214639   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:43:59.214696   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:43:59.239790   44722 cri.go:89] found id: ""
	I1213 18:43:59.239803   44722 logs.go:282] 0 containers: []
	W1213 18:43:59.239810   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:43:59.239815   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:43:59.239881   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:43:59.268177   44722 cri.go:89] found id: ""
	I1213 18:43:59.268191   44722 logs.go:282] 0 containers: []
	W1213 18:43:59.268198   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:43:59.268203   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:43:59.268267   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:43:59.292660   44722 cri.go:89] found id: ""
	I1213 18:43:59.292674   44722 logs.go:282] 0 containers: []
	W1213 18:43:59.292680   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:43:59.292687   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:43:59.292746   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:43:59.318413   44722 cri.go:89] found id: ""
	I1213 18:43:59.318428   44722 logs.go:282] 0 containers: []
	W1213 18:43:59.318434   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:43:59.318442   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:43:59.318453   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:43:59.383565   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:43:59.383584   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:43:59.394753   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:43:59.394770   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:43:59.455757   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:43:59.448022   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:59.448571   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:59.450046   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:59.450376   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:59.451813   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:43:59.448022   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:59.448571   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:59.450046   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:59.450376   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:59.451813   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:43:59.455767   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:43:59.455777   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:43:59.527189   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:43:59.527209   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:02.063131   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:02.073460   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:02.073527   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:02.100600   44722 cri.go:89] found id: ""
	I1213 18:44:02.100614   44722 logs.go:282] 0 containers: []
	W1213 18:44:02.100621   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:02.100626   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:02.100683   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:02.128484   44722 cri.go:89] found id: ""
	I1213 18:44:02.128498   44722 logs.go:282] 0 containers: []
	W1213 18:44:02.128505   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:02.128510   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:02.128569   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:02.153979   44722 cri.go:89] found id: ""
	I1213 18:44:02.153994   44722 logs.go:282] 0 containers: []
	W1213 18:44:02.154000   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:02.154005   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:02.154063   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:02.178950   44722 cri.go:89] found id: ""
	I1213 18:44:02.178964   44722 logs.go:282] 0 containers: []
	W1213 18:44:02.178971   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:02.178975   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:02.179034   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:02.203560   44722 cri.go:89] found id: ""
	I1213 18:44:02.203573   44722 logs.go:282] 0 containers: []
	W1213 18:44:02.203599   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:02.203604   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:02.203668   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:02.235040   44722 cri.go:89] found id: ""
	I1213 18:44:02.235054   44722 logs.go:282] 0 containers: []
	W1213 18:44:02.235061   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:02.235066   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:02.235125   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:02.262563   44722 cri.go:89] found id: ""
	I1213 18:44:02.262578   44722 logs.go:282] 0 containers: []
	W1213 18:44:02.262591   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:02.262598   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:02.262610   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:02.330429   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:02.330448   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:02.358932   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:02.358953   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:02.430089   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:02.430108   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:02.441162   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:02.441179   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:02.505804   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:02.496664   12407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:02.498082   12407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:02.499014   12407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:02.500016   12407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:02.500340   12407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:02.496664   12407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:02.498082   12407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:02.499014   12407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:02.500016   12407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:02.500340   12407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:05.006147   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:05.021965   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:05.022041   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:05.052122   44722 cri.go:89] found id: ""
	I1213 18:44:05.052138   44722 logs.go:282] 0 containers: []
	W1213 18:44:05.052145   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:05.052152   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:05.052213   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:05.079304   44722 cri.go:89] found id: ""
	I1213 18:44:05.079318   44722 logs.go:282] 0 containers: []
	W1213 18:44:05.079325   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:05.079330   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:05.079387   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:05.106489   44722 cri.go:89] found id: ""
	I1213 18:44:05.106502   44722 logs.go:282] 0 containers: []
	W1213 18:44:05.106510   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:05.106515   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:05.106573   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:05.132104   44722 cri.go:89] found id: ""
	I1213 18:44:05.132118   44722 logs.go:282] 0 containers: []
	W1213 18:44:05.132125   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:05.132130   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:05.132186   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:05.157774   44722 cri.go:89] found id: ""
	I1213 18:44:05.157789   44722 logs.go:282] 0 containers: []
	W1213 18:44:05.157795   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:05.157800   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:05.157860   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:05.185228   44722 cri.go:89] found id: ""
	I1213 18:44:05.185241   44722 logs.go:282] 0 containers: []
	W1213 18:44:05.185248   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:05.185254   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:05.185313   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:05.211945   44722 cri.go:89] found id: ""
	I1213 18:44:05.211959   44722 logs.go:282] 0 containers: []
	W1213 18:44:05.211965   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:05.211973   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:05.211982   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:05.240000   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:05.240016   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:05.305313   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:05.305331   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:05.316614   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:05.316628   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:05.380462   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:05.372183   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:05.373062   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:05.374815   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:05.375112   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:05.376609   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:05.372183   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:05.373062   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:05.374815   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:05.375112   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:05.376609   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:05.380472   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:05.380482   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:07.948856   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:07.959788   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:07.959853   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:07.985640   44722 cri.go:89] found id: ""
	I1213 18:44:07.985655   44722 logs.go:282] 0 containers: []
	W1213 18:44:07.985662   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:07.985667   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:07.985735   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:08.017082   44722 cri.go:89] found id: ""
	I1213 18:44:08.017096   44722 logs.go:282] 0 containers: []
	W1213 18:44:08.017105   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:08.017111   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:08.017176   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:08.046580   44722 cri.go:89] found id: ""
	I1213 18:44:08.046595   44722 logs.go:282] 0 containers: []
	W1213 18:44:08.046603   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:08.046609   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:08.046678   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:08.073255   44722 cri.go:89] found id: ""
	I1213 18:44:08.073269   44722 logs.go:282] 0 containers: []
	W1213 18:44:08.073275   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:08.073281   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:08.073342   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:08.101465   44722 cri.go:89] found id: ""
	I1213 18:44:08.101479   44722 logs.go:282] 0 containers: []
	W1213 18:44:08.101486   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:08.101491   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:08.101560   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:08.126539   44722 cri.go:89] found id: ""
	I1213 18:44:08.126553   44722 logs.go:282] 0 containers: []
	W1213 18:44:08.126559   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:08.126564   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:08.126624   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:08.151274   44722 cri.go:89] found id: ""
	I1213 18:44:08.151287   44722 logs.go:282] 0 containers: []
	W1213 18:44:08.151294   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:08.151301   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:08.151311   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:08.221734   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:08.221760   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:08.234257   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:08.234274   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:08.303822   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:08.293709   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:08.294557   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:08.296695   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:08.297712   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:08.298655   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:08.293709   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:08.294557   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:08.296695   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:08.297712   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:08.298655   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:08.303834   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:08.303846   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:08.373320   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:08.373340   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:10.905140   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:10.916748   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:10.916820   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:10.944090   44722 cri.go:89] found id: ""
	I1213 18:44:10.944103   44722 logs.go:282] 0 containers: []
	W1213 18:44:10.944111   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:10.944115   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:10.944176   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:10.969154   44722 cri.go:89] found id: ""
	I1213 18:44:10.969168   44722 logs.go:282] 0 containers: []
	W1213 18:44:10.969174   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:10.969179   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:10.969237   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:10.994056   44722 cri.go:89] found id: ""
	I1213 18:44:10.994070   44722 logs.go:282] 0 containers: []
	W1213 18:44:10.994078   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:10.994082   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:10.994195   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:11.026335   44722 cri.go:89] found id: ""
	I1213 18:44:11.026349   44722 logs.go:282] 0 containers: []
	W1213 18:44:11.026356   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:11.026362   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:11.026420   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:11.051618   44722 cri.go:89] found id: ""
	I1213 18:44:11.051632   44722 logs.go:282] 0 containers: []
	W1213 18:44:11.051639   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:11.051644   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:11.051702   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:11.077796   44722 cri.go:89] found id: ""
	I1213 18:44:11.077811   44722 logs.go:282] 0 containers: []
	W1213 18:44:11.077818   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:11.077824   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:11.077885   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:11.106061   44722 cri.go:89] found id: ""
	I1213 18:44:11.106082   44722 logs.go:282] 0 containers: []
	W1213 18:44:11.106089   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:11.106096   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:11.106107   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:11.172632   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:11.164014   12707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:11.164956   12707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:11.166552   12707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:11.167108   12707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:11.168668   12707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:11.164014   12707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:11.164956   12707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:11.166552   12707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:11.167108   12707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:11.168668   12707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:11.172644   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:11.172654   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:11.241474   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:11.241492   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:11.270376   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:11.270394   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:11.335341   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:11.335360   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:13.846544   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:13.858154   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:13.858216   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:13.891714   44722 cri.go:89] found id: ""
	I1213 18:44:13.891728   44722 logs.go:282] 0 containers: []
	W1213 18:44:13.891735   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:13.891740   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:13.891796   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:13.917089   44722 cri.go:89] found id: ""
	I1213 18:44:13.917103   44722 logs.go:282] 0 containers: []
	W1213 18:44:13.917110   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:13.917115   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:13.917175   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:13.942618   44722 cri.go:89] found id: ""
	I1213 18:44:13.942637   44722 logs.go:282] 0 containers: []
	W1213 18:44:13.942644   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:13.942654   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:13.942717   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:13.972824   44722 cri.go:89] found id: ""
	I1213 18:44:13.972837   44722 logs.go:282] 0 containers: []
	W1213 18:44:13.972844   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:13.972850   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:13.972911   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:14.002454   44722 cri.go:89] found id: ""
	I1213 18:44:14.002478   44722 logs.go:282] 0 containers: []
	W1213 18:44:14.002507   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:14.002515   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:14.002584   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:14.029621   44722 cri.go:89] found id: ""
	I1213 18:44:14.029635   44722 logs.go:282] 0 containers: []
	W1213 18:44:14.029642   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:14.029647   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:14.029705   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:14.059348   44722 cri.go:89] found id: ""
	I1213 18:44:14.059361   44722 logs.go:282] 0 containers: []
	W1213 18:44:14.059368   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:14.059376   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:14.059386   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:14.089028   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:14.089044   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:14.154770   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:14.154787   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:14.165718   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:14.165733   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:14.229870   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:14.221572   12825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:14.222738   12825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:14.223785   12825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:14.224389   12825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:14.225986   12825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:14.221572   12825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:14.222738   12825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:14.223785   12825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:14.224389   12825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:14.225986   12825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:14.229881   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:14.229893   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:16.799799   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:16.810049   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:16.810109   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:16.841177   44722 cri.go:89] found id: ""
	I1213 18:44:16.841190   44722 logs.go:282] 0 containers: []
	W1213 18:44:16.841197   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:16.841202   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:16.841258   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:16.867562   44722 cri.go:89] found id: ""
	I1213 18:44:16.867576   44722 logs.go:282] 0 containers: []
	W1213 18:44:16.867583   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:16.867588   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:16.867647   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:16.894362   44722 cri.go:89] found id: ""
	I1213 18:44:16.894376   44722 logs.go:282] 0 containers: []
	W1213 18:44:16.894383   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:16.894388   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:16.894449   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:16.922192   44722 cri.go:89] found id: ""
	I1213 18:44:16.922205   44722 logs.go:282] 0 containers: []
	W1213 18:44:16.922212   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:16.922217   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:16.922274   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:16.947061   44722 cri.go:89] found id: ""
	I1213 18:44:16.947081   44722 logs.go:282] 0 containers: []
	W1213 18:44:16.947088   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:16.947093   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:16.947151   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:16.973311   44722 cri.go:89] found id: ""
	I1213 18:44:16.973337   44722 logs.go:282] 0 containers: []
	W1213 18:44:16.973345   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:16.973349   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:16.973409   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:17.002040   44722 cri.go:89] found id: ""
	I1213 18:44:17.002056   44722 logs.go:282] 0 containers: []
	W1213 18:44:17.002077   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:17.002086   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:17.002097   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:17.070995   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:17.062754   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:17.063352   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:17.064945   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:17.065473   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:17.066944   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:17.062754   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:17.063352   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:17.064945   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:17.065473   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:17.066944   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:17.071005   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:17.071015   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:17.142450   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:17.142467   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:17.174618   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:17.174636   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:17.245843   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:17.245861   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:19.758316   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:19.768061   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:19.768139   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:19.793023   44722 cri.go:89] found id: ""
	I1213 18:44:19.793037   44722 logs.go:282] 0 containers: []
	W1213 18:44:19.793044   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:19.793049   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:19.793113   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:19.817629   44722 cri.go:89] found id: ""
	I1213 18:44:19.817643   44722 logs.go:282] 0 containers: []
	W1213 18:44:19.817649   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:19.817654   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:19.817710   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:19.851145   44722 cri.go:89] found id: ""
	I1213 18:44:19.851159   44722 logs.go:282] 0 containers: []
	W1213 18:44:19.851166   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:19.851170   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:19.851234   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:19.881252   44722 cri.go:89] found id: ""
	I1213 18:44:19.881265   44722 logs.go:282] 0 containers: []
	W1213 18:44:19.881272   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:19.881277   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:19.881339   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:19.912741   44722 cri.go:89] found id: ""
	I1213 18:44:19.912754   44722 logs.go:282] 0 containers: []
	W1213 18:44:19.912761   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:19.912766   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:19.912823   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:19.940085   44722 cri.go:89] found id: ""
	I1213 18:44:19.940098   44722 logs.go:282] 0 containers: []
	W1213 18:44:19.940105   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:19.940110   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:19.940168   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:19.967047   44722 cri.go:89] found id: ""
	I1213 18:44:19.967061   44722 logs.go:282] 0 containers: []
	W1213 18:44:19.967067   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:19.967081   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:19.967092   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:20.039016   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:20.039038   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:20.052809   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:20.052826   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:20.124568   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:20.115906   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:20.116315   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:20.118019   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:20.118655   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:20.120394   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:20.115906   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:20.116315   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:20.118019   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:20.118655   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:20.120394   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:20.124579   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:20.124595   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:20.192989   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:20.193017   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:22.722315   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:22.732622   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:22.732684   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:22.757530   44722 cri.go:89] found id: ""
	I1213 18:44:22.757544   44722 logs.go:282] 0 containers: []
	W1213 18:44:22.757551   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:22.757556   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:22.757614   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:22.783868   44722 cri.go:89] found id: ""
	I1213 18:44:22.783891   44722 logs.go:282] 0 containers: []
	W1213 18:44:22.783899   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:22.783906   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:22.783973   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:22.809581   44722 cri.go:89] found id: ""
	I1213 18:44:22.809602   44722 logs.go:282] 0 containers: []
	W1213 18:44:22.809610   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:22.809615   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:22.809676   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:22.844651   44722 cri.go:89] found id: ""
	I1213 18:44:22.844665   44722 logs.go:282] 0 containers: []
	W1213 18:44:22.844672   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:22.844677   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:22.844734   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:22.878207   44722 cri.go:89] found id: ""
	I1213 18:44:22.878221   44722 logs.go:282] 0 containers: []
	W1213 18:44:22.878228   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:22.878233   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:22.878291   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:22.909295   44722 cri.go:89] found id: ""
	I1213 18:44:22.909309   44722 logs.go:282] 0 containers: []
	W1213 18:44:22.909316   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:22.909322   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:22.909382   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:22.936178   44722 cri.go:89] found id: ""
	I1213 18:44:22.936191   44722 logs.go:282] 0 containers: []
	W1213 18:44:22.936207   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:22.936215   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:22.936225   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:23.005296   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:22.992378   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:22.993185   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:22.994804   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:22.995396   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:22.997070   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:22.992378   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:22.993185   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:22.994804   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:22.995396   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:22.997070   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:23.005308   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:23.005319   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:23.079778   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:23.079797   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:23.109955   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:23.109982   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:23.176235   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:23.176252   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:25.689578   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:25.699921   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:25.699979   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:25.723877   44722 cri.go:89] found id: ""
	I1213 18:44:25.723891   44722 logs.go:282] 0 containers: []
	W1213 18:44:25.723898   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:25.723902   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:25.723959   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:25.749128   44722 cri.go:89] found id: ""
	I1213 18:44:25.749142   44722 logs.go:282] 0 containers: []
	W1213 18:44:25.749148   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:25.749153   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:25.749209   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:25.773791   44722 cri.go:89] found id: ""
	I1213 18:44:25.773811   44722 logs.go:282] 0 containers: []
	W1213 18:44:25.773818   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:25.773823   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:25.773881   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:25.799904   44722 cri.go:89] found id: ""
	I1213 18:44:25.799917   44722 logs.go:282] 0 containers: []
	W1213 18:44:25.799924   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:25.799929   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:25.799988   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:25.825978   44722 cri.go:89] found id: ""
	I1213 18:44:25.825992   44722 logs.go:282] 0 containers: []
	W1213 18:44:25.825999   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:25.826004   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:25.826061   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:25.861824   44722 cri.go:89] found id: ""
	I1213 18:44:25.861838   44722 logs.go:282] 0 containers: []
	W1213 18:44:25.861854   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:25.861860   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:25.861917   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:25.899196   44722 cri.go:89] found id: ""
	I1213 18:44:25.899209   44722 logs.go:282] 0 containers: []
	W1213 18:44:25.899227   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:25.899235   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:25.899245   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:25.962230   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:25.953208   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:25.953997   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:25.955726   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:25.956332   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:25.957845   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:25.953208   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:25.953997   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:25.955726   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:25.956332   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:25.957845   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:25.962249   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:25.962260   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:26.029250   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:26.029269   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:26.058026   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:26.058045   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:26.126957   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:26.126975   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:28.638630   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:28.649197   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:28.649261   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:28.678140   44722 cri.go:89] found id: ""
	I1213 18:44:28.678155   44722 logs.go:282] 0 containers: []
	W1213 18:44:28.678162   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:28.678166   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:28.678225   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:28.704240   44722 cri.go:89] found id: ""
	I1213 18:44:28.704253   44722 logs.go:282] 0 containers: []
	W1213 18:44:28.704266   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:28.704271   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:28.704332   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:28.729471   44722 cri.go:89] found id: ""
	I1213 18:44:28.729484   44722 logs.go:282] 0 containers: []
	W1213 18:44:28.729492   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:28.729499   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:28.729560   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:28.755384   44722 cri.go:89] found id: ""
	I1213 18:44:28.755397   44722 logs.go:282] 0 containers: []
	W1213 18:44:28.755404   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:28.755419   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:28.755527   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:28.780729   44722 cri.go:89] found id: ""
	I1213 18:44:28.780742   44722 logs.go:282] 0 containers: []
	W1213 18:44:28.780749   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:28.780754   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:28.780819   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:28.807414   44722 cri.go:89] found id: ""
	I1213 18:44:28.807428   44722 logs.go:282] 0 containers: []
	W1213 18:44:28.807434   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:28.807439   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:28.807495   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:28.834478   44722 cri.go:89] found id: ""
	I1213 18:44:28.834492   44722 logs.go:282] 0 containers: []
	W1213 18:44:28.834501   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:28.834509   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:28.834519   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:28.928552   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:28.919277   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:28.920155   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:28.921759   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:28.922310   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:28.923982   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:28.919277   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:28.920155   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:28.921759   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:28.922310   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:28.923982   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:28.928563   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:28.928572   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:28.998427   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:28.998448   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:29.028696   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:29.028713   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:29.094175   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:29.094194   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:31.605517   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:31.616232   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:31.616297   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:31.642711   44722 cri.go:89] found id: ""
	I1213 18:44:31.642725   44722 logs.go:282] 0 containers: []
	W1213 18:44:31.642733   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:31.642738   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:31.642796   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:31.669186   44722 cri.go:89] found id: ""
	I1213 18:44:31.669201   44722 logs.go:282] 0 containers: []
	W1213 18:44:31.669208   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:31.669212   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:31.669271   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:31.696754   44722 cri.go:89] found id: ""
	I1213 18:44:31.696768   44722 logs.go:282] 0 containers: []
	W1213 18:44:31.696775   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:31.696780   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:31.696840   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:31.722602   44722 cri.go:89] found id: ""
	I1213 18:44:31.722616   44722 logs.go:282] 0 containers: []
	W1213 18:44:31.722623   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:31.722628   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:31.722687   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:31.749280   44722 cri.go:89] found id: ""
	I1213 18:44:31.749294   44722 logs.go:282] 0 containers: []
	W1213 18:44:31.749302   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:31.749307   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:31.749386   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:31.774452   44722 cri.go:89] found id: ""
	I1213 18:44:31.774466   44722 logs.go:282] 0 containers: []
	W1213 18:44:31.774473   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:31.774478   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:31.774536   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:31.804250   44722 cri.go:89] found id: ""
	I1213 18:44:31.804264   44722 logs.go:282] 0 containers: []
	W1213 18:44:31.804271   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:31.804278   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:31.804288   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:31.876057   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:31.876075   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:31.887830   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:31.887845   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:31.956181   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:31.947856   13442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:31.948537   13442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:31.950179   13442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:31.950675   13442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:31.952236   13442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:31.947856   13442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:31.948537   13442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:31.950179   13442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:31.950675   13442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:31.952236   13442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:31.956191   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:31.956202   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:32.025697   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:32.025716   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:34.558938   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:34.569025   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:34.569094   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:34.598446   44722 cri.go:89] found id: ""
	I1213 18:44:34.598459   44722 logs.go:282] 0 containers: []
	W1213 18:44:34.598466   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:34.598470   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:34.598537   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:34.624087   44722 cri.go:89] found id: ""
	I1213 18:44:34.624105   44722 logs.go:282] 0 containers: []
	W1213 18:44:34.624132   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:34.624137   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:34.624204   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:34.649175   44722 cri.go:89] found id: ""
	I1213 18:44:34.649189   44722 logs.go:282] 0 containers: []
	W1213 18:44:34.649196   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:34.649201   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:34.649257   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:34.679802   44722 cri.go:89] found id: ""
	I1213 18:44:34.679816   44722 logs.go:282] 0 containers: []
	W1213 18:44:34.679823   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:34.679828   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:34.679886   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:34.706842   44722 cri.go:89] found id: ""
	I1213 18:44:34.706856   44722 logs.go:282] 0 containers: []
	W1213 18:44:34.706863   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:34.706868   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:34.706928   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:34.732851   44722 cri.go:89] found id: ""
	I1213 18:44:34.732878   44722 logs.go:282] 0 containers: []
	W1213 18:44:34.732885   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:34.732906   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:34.732972   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:34.758491   44722 cri.go:89] found id: ""
	I1213 18:44:34.758504   44722 logs.go:282] 0 containers: []
	W1213 18:44:34.758511   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:34.758520   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:34.758530   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:34.831184   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:34.831212   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:34.854446   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:34.854463   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:34.939932   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:34.930787   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:34.931550   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:34.933427   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:34.934090   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:34.935671   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:34.930787   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:34.931550   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:34.933427   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:34.934090   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:34.935671   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:34.939943   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:34.939953   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:35.008351   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:35.008373   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:37.538092   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:37.548372   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:37.548433   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:37.576028   44722 cri.go:89] found id: ""
	I1213 18:44:37.576042   44722 logs.go:282] 0 containers: []
	W1213 18:44:37.576049   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:37.576054   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:37.576116   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:37.601240   44722 cri.go:89] found id: ""
	I1213 18:44:37.601264   44722 logs.go:282] 0 containers: []
	W1213 18:44:37.601272   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:37.601277   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:37.601354   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:37.629739   44722 cri.go:89] found id: ""
	I1213 18:44:37.629752   44722 logs.go:282] 0 containers: []
	W1213 18:44:37.629759   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:37.629764   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:37.629821   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:37.659547   44722 cri.go:89] found id: ""
	I1213 18:44:37.659560   44722 logs.go:282] 0 containers: []
	W1213 18:44:37.659567   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:37.659582   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:37.659639   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:37.687820   44722 cri.go:89] found id: ""
	I1213 18:44:37.687833   44722 logs.go:282] 0 containers: []
	W1213 18:44:37.687841   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:37.687846   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:37.687913   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:37.713950   44722 cri.go:89] found id: ""
	I1213 18:44:37.713964   44722 logs.go:282] 0 containers: []
	W1213 18:44:37.713971   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:37.713976   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:37.714035   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:37.739532   44722 cri.go:89] found id: ""
	I1213 18:44:37.739557   44722 logs.go:282] 0 containers: []
	W1213 18:44:37.739564   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:37.739572   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:37.739588   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:37.769815   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:37.769831   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:37.842765   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:37.842782   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:37.856389   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:37.856405   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:37.939080   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:37.930901   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:37.931464   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:37.933144   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:37.933671   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:37.935120   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:37.930901   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:37.931464   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:37.933144   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:37.933671   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:37.935120   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:37.939091   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:37.939101   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:40.510055   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:40.520003   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:40.520078   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:40.546166   44722 cri.go:89] found id: ""
	I1213 18:44:40.546181   44722 logs.go:282] 0 containers: []
	W1213 18:44:40.546187   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:40.546193   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:40.546255   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:40.575492   44722 cri.go:89] found id: ""
	I1213 18:44:40.575506   44722 logs.go:282] 0 containers: []
	W1213 18:44:40.575512   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:40.575517   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:40.575572   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:40.604021   44722 cri.go:89] found id: ""
	I1213 18:44:40.604034   44722 logs.go:282] 0 containers: []
	W1213 18:44:40.604042   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:40.604047   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:40.604103   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:40.634511   44722 cri.go:89] found id: ""
	I1213 18:44:40.634525   44722 logs.go:282] 0 containers: []
	W1213 18:44:40.634533   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:40.634537   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:40.634597   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:40.659233   44722 cri.go:89] found id: ""
	I1213 18:44:40.659255   44722 logs.go:282] 0 containers: []
	W1213 18:44:40.659263   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:40.659268   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:40.659327   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:40.684289   44722 cri.go:89] found id: ""
	I1213 18:44:40.684314   44722 logs.go:282] 0 containers: []
	W1213 18:44:40.684321   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:40.684326   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:40.684401   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:40.716236   44722 cri.go:89] found id: ""
	I1213 18:44:40.716250   44722 logs.go:282] 0 containers: []
	W1213 18:44:40.716258   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:40.716265   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:40.716277   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:40.743946   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:40.743962   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:40.809441   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:40.809459   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:40.820434   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:40.820458   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:40.906406   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:40.898049   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:40.898672   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:40.900282   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:40.900803   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:40.902445   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:40.898049   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:40.898672   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:40.900282   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:40.900803   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:40.902445   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:40.906416   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:40.906426   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:43.474264   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:43.484255   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:43.484319   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:43.511963   44722 cri.go:89] found id: ""
	I1213 18:44:43.511977   44722 logs.go:282] 0 containers: []
	W1213 18:44:43.511984   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:43.511989   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:43.512049   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:43.537311   44722 cri.go:89] found id: ""
	I1213 18:44:43.537332   44722 logs.go:282] 0 containers: []
	W1213 18:44:43.537339   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:43.537343   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:43.537433   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:43.564197   44722 cri.go:89] found id: ""
	I1213 18:44:43.564211   44722 logs.go:282] 0 containers: []
	W1213 18:44:43.564218   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:43.564222   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:43.564278   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:43.590140   44722 cri.go:89] found id: ""
	I1213 18:44:43.590154   44722 logs.go:282] 0 containers: []
	W1213 18:44:43.590160   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:43.590166   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:43.590226   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:43.615885   44722 cri.go:89] found id: ""
	I1213 18:44:43.615900   44722 logs.go:282] 0 containers: []
	W1213 18:44:43.615916   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:43.615921   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:43.615987   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:43.640848   44722 cri.go:89] found id: ""
	I1213 18:44:43.640862   44722 logs.go:282] 0 containers: []
	W1213 18:44:43.640868   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:43.640873   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:43.640931   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:43.665363   44722 cri.go:89] found id: ""
	I1213 18:44:43.665377   44722 logs.go:282] 0 containers: []
	W1213 18:44:43.665384   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:43.665391   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:43.665403   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:43.676205   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:43.676227   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:43.739640   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:43.731228   13863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:43.732007   13863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:43.733627   13863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:43.734165   13863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:43.735773   13863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:43.731228   13863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:43.732007   13863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:43.733627   13863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:43.734165   13863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:43.735773   13863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:43.739650   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:43.739661   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:43.807987   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:43.808008   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:43.851586   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:43.851601   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:46.426151   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:46.436240   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:46.436307   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:46.469030   44722 cri.go:89] found id: ""
	I1213 18:44:46.469044   44722 logs.go:282] 0 containers: []
	W1213 18:44:46.469051   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:46.469056   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:46.469115   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:46.494555   44722 cri.go:89] found id: ""
	I1213 18:44:46.494568   44722 logs.go:282] 0 containers: []
	W1213 18:44:46.494575   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:46.494580   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:46.494638   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:46.519291   44722 cri.go:89] found id: ""
	I1213 18:44:46.519305   44722 logs.go:282] 0 containers: []
	W1213 18:44:46.519312   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:46.519316   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:46.519371   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:46.547775   44722 cri.go:89] found id: ""
	I1213 18:44:46.547790   44722 logs.go:282] 0 containers: []
	W1213 18:44:46.547797   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:46.547802   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:46.547860   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:46.572951   44722 cri.go:89] found id: ""
	I1213 18:44:46.572965   44722 logs.go:282] 0 containers: []
	W1213 18:44:46.572972   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:46.572978   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:46.573096   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:46.598953   44722 cri.go:89] found id: ""
	I1213 18:44:46.598967   44722 logs.go:282] 0 containers: []
	W1213 18:44:46.598973   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:46.598979   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:46.599036   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:46.624426   44722 cri.go:89] found id: ""
	I1213 18:44:46.624440   44722 logs.go:282] 0 containers: []
	W1213 18:44:46.624447   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:46.624454   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:46.624465   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:46.656272   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:46.656289   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:46.720505   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:46.720523   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:46.731422   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:46.731438   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:46.794954   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:46.786465   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:46.786956   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:46.788689   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:46.789067   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:46.790678   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:46.786465   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:46.786956   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:46.788689   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:46.789067   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:46.790678   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:46.794964   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:46.794974   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:49.368713   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:49.379093   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:49.379150   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:49.404638   44722 cri.go:89] found id: ""
	I1213 18:44:49.404652   44722 logs.go:282] 0 containers: []
	W1213 18:44:49.404670   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:49.404676   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:49.404743   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:49.432165   44722 cri.go:89] found id: ""
	I1213 18:44:49.432185   44722 logs.go:282] 0 containers: []
	W1213 18:44:49.432192   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:49.432203   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:49.432274   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:49.457580   44722 cri.go:89] found id: ""
	I1213 18:44:49.457594   44722 logs.go:282] 0 containers: []
	W1213 18:44:49.457601   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:49.457605   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:49.457661   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:49.482518   44722 cri.go:89] found id: ""
	I1213 18:44:49.482531   44722 logs.go:282] 0 containers: []
	W1213 18:44:49.482539   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:49.482544   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:49.482604   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:49.508421   44722 cri.go:89] found id: ""
	I1213 18:44:49.508435   44722 logs.go:282] 0 containers: []
	W1213 18:44:49.508442   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:49.508447   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:49.508505   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:49.533273   44722 cri.go:89] found id: ""
	I1213 18:44:49.533286   44722 logs.go:282] 0 containers: []
	W1213 18:44:49.533293   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:49.533298   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:49.533363   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:49.559407   44722 cri.go:89] found id: ""
	I1213 18:44:49.559421   44722 logs.go:282] 0 containers: []
	W1213 18:44:49.559428   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:49.559436   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:49.559447   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:49.586863   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:49.586880   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:49.655301   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:49.655318   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:49.666641   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:49.666657   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:49.731547   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:49.723390   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:49.723925   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:49.725596   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:49.726135   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:49.727809   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:49.723390   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:49.723925   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:49.725596   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:49.726135   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:49.727809   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:49.731558   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:49.731569   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:52.302228   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:52.312354   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:52.312414   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:52.339337   44722 cri.go:89] found id: ""
	I1213 18:44:52.339351   44722 logs.go:282] 0 containers: []
	W1213 18:44:52.339358   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:52.339363   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:52.339428   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:52.364722   44722 cri.go:89] found id: ""
	I1213 18:44:52.364736   44722 logs.go:282] 0 containers: []
	W1213 18:44:52.364744   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:52.364748   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:52.364807   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:52.392869   44722 cri.go:89] found id: ""
	I1213 18:44:52.392883   44722 logs.go:282] 0 containers: []
	W1213 18:44:52.392889   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:52.392894   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:52.392952   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:52.420101   44722 cri.go:89] found id: ""
	I1213 18:44:52.420115   44722 logs.go:282] 0 containers: []
	W1213 18:44:52.420122   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:52.420126   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:52.420186   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:52.444708   44722 cri.go:89] found id: ""
	I1213 18:44:52.444721   44722 logs.go:282] 0 containers: []
	W1213 18:44:52.444728   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:52.444733   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:52.444789   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:52.470027   44722 cri.go:89] found id: ""
	I1213 18:44:52.470041   44722 logs.go:282] 0 containers: []
	W1213 18:44:52.470048   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:52.470053   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:52.470112   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:52.494761   44722 cri.go:89] found id: ""
	I1213 18:44:52.494775   44722 logs.go:282] 0 containers: []
	W1213 18:44:52.494782   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:52.494789   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:52.494799   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:52.563435   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:52.563455   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:52.597529   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:52.597545   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:52.667889   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:52.667909   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:52.679020   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:52.679036   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:52.744141   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:52.735527   14192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:52.736263   14192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:52.738012   14192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:52.738630   14192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:52.740366   14192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:52.735527   14192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:52.736263   14192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:52.738012   14192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:52.738630   14192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:52.740366   14192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:55.245804   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:55.256306   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:55.256370   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:55.283000   44722 cri.go:89] found id: ""
	I1213 18:44:55.283013   44722 logs.go:282] 0 containers: []
	W1213 18:44:55.283020   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:55.283025   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:55.283082   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:55.313671   44722 cri.go:89] found id: ""
	I1213 18:44:55.313684   44722 logs.go:282] 0 containers: []
	W1213 18:44:55.313690   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:55.313695   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:55.313755   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:55.342037   44722 cri.go:89] found id: ""
	I1213 18:44:55.342051   44722 logs.go:282] 0 containers: []
	W1213 18:44:55.342059   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:55.342064   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:55.342127   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:55.367525   44722 cri.go:89] found id: ""
	I1213 18:44:55.367538   44722 logs.go:282] 0 containers: []
	W1213 18:44:55.367557   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:55.367562   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:55.367628   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:55.393243   44722 cri.go:89] found id: ""
	I1213 18:44:55.393257   44722 logs.go:282] 0 containers: []
	W1213 18:44:55.393274   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:55.393280   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:55.393353   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:55.418513   44722 cri.go:89] found id: ""
	I1213 18:44:55.418527   44722 logs.go:282] 0 containers: []
	W1213 18:44:55.418534   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:55.418539   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:55.418607   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:55.443468   44722 cri.go:89] found id: ""
	I1213 18:44:55.443483   44722 logs.go:282] 0 containers: []
	W1213 18:44:55.443490   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:55.443500   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:55.443511   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:55.515427   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:55.507029   14277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:55.507943   14277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:55.509657   14277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:55.510148   14277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:55.511618   14277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:55.507029   14277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:55.507943   14277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:55.509657   14277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:55.510148   14277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:55.511618   14277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:55.515437   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:55.515448   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:55.586865   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:55.586885   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:55.616109   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:55.616125   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:55.685952   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:55.685972   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:58.198520   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:58.208638   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:58.208696   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:58.234480   44722 cri.go:89] found id: ""
	I1213 18:44:58.234494   44722 logs.go:282] 0 containers: []
	W1213 18:44:58.234501   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:58.234506   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:58.234561   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:58.258261   44722 cri.go:89] found id: ""
	I1213 18:44:58.258274   44722 logs.go:282] 0 containers: []
	W1213 18:44:58.258281   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:58.258287   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:58.258358   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:58.282891   44722 cri.go:89] found id: ""
	I1213 18:44:58.282904   44722 logs.go:282] 0 containers: []
	W1213 18:44:58.282911   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:58.282916   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:58.282971   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:58.315746   44722 cri.go:89] found id: ""
	I1213 18:44:58.315760   44722 logs.go:282] 0 containers: []
	W1213 18:44:58.315766   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:58.315771   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:58.315830   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:58.340701   44722 cri.go:89] found id: ""
	I1213 18:44:58.340714   44722 logs.go:282] 0 containers: []
	W1213 18:44:58.340721   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:58.340726   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:58.340792   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:58.369974   44722 cri.go:89] found id: ""
	I1213 18:44:58.369987   44722 logs.go:282] 0 containers: []
	W1213 18:44:58.369994   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:58.369998   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:58.370063   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:58.398903   44722 cri.go:89] found id: ""
	I1213 18:44:58.398917   44722 logs.go:282] 0 containers: []
	W1213 18:44:58.398924   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:58.398932   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:58.398945   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:58.468133   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:58.468153   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:58.495769   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:58.495787   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:58.562032   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:58.562052   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:58.573192   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:58.573208   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:58.639058   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:58.631176   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:58.631711   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:58.633329   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:58.633843   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:58.635281   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:58.631176   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:58.631711   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:58.633329   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:58.633843   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:58.635281   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:01.139326   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:01.150701   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:01.150773   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:01.180572   44722 cri.go:89] found id: ""
	I1213 18:45:01.180597   44722 logs.go:282] 0 containers: []
	W1213 18:45:01.180627   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:01.180632   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:01.180723   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:01.210001   44722 cri.go:89] found id: ""
	I1213 18:45:01.210027   44722 logs.go:282] 0 containers: []
	W1213 18:45:01.210035   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:01.210040   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:01.210144   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:01.240388   44722 cri.go:89] found id: ""
	I1213 18:45:01.240411   44722 logs.go:282] 0 containers: []
	W1213 18:45:01.240419   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:01.240425   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:01.240500   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:01.270469   44722 cri.go:89] found id: ""
	I1213 18:45:01.270485   44722 logs.go:282] 0 containers: []
	W1213 18:45:01.270492   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:01.270498   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:01.270560   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:01.298917   44722 cri.go:89] found id: ""
	I1213 18:45:01.298932   44722 logs.go:282] 0 containers: []
	W1213 18:45:01.298950   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:01.298956   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:01.299047   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:01.326174   44722 cri.go:89] found id: ""
	I1213 18:45:01.326188   44722 logs.go:282] 0 containers: []
	W1213 18:45:01.326195   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:01.326200   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:01.326260   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:01.355316   44722 cri.go:89] found id: ""
	I1213 18:45:01.355331   44722 logs.go:282] 0 containers: []
	W1213 18:45:01.355339   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:01.355348   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:01.355360   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:01.431176   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:01.431206   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:01.443676   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:01.443695   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:01.512045   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:01.503556   14499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:01.504288   14499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:01.506017   14499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:01.506375   14499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:01.508015   14499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:01.503556   14499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:01.504288   14499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:01.506017   14499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:01.506375   14499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:01.508015   14499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:01.512056   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:01.512066   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:01.581540   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:01.581560   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:04.113152   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:04.126133   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:04.126190   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:04.157022   44722 cri.go:89] found id: ""
	I1213 18:45:04.157037   44722 logs.go:282] 0 containers: []
	W1213 18:45:04.157044   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:04.157050   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:04.157111   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:04.184060   44722 cri.go:89] found id: ""
	I1213 18:45:04.184073   44722 logs.go:282] 0 containers: []
	W1213 18:45:04.184080   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:04.184085   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:04.184144   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:04.210310   44722 cri.go:89] found id: ""
	I1213 18:45:04.210323   44722 logs.go:282] 0 containers: []
	W1213 18:45:04.210330   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:04.210336   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:04.210398   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:04.236685   44722 cri.go:89] found id: ""
	I1213 18:45:04.236700   44722 logs.go:282] 0 containers: []
	W1213 18:45:04.236707   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:04.236712   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:04.236771   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:04.265948   44722 cri.go:89] found id: ""
	I1213 18:45:04.265961   44722 logs.go:282] 0 containers: []
	W1213 18:45:04.265968   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:04.265973   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:04.266029   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:04.291029   44722 cri.go:89] found id: ""
	I1213 18:45:04.291042   44722 logs.go:282] 0 containers: []
	W1213 18:45:04.291049   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:04.291065   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:04.291122   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:04.316748   44722 cri.go:89] found id: ""
	I1213 18:45:04.316762   44722 logs.go:282] 0 containers: []
	W1213 18:45:04.316768   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:04.316787   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:04.316798   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:04.380978   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:04.380996   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:04.392325   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:04.392342   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:04.459627   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:04.451449   14606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:04.452151   14606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:04.453706   14606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:04.454141   14606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:04.455629   14606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:04.451449   14606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:04.452151   14606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:04.453706   14606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:04.454141   14606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:04.455629   14606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:04.459637   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:04.459648   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:04.527567   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:04.527587   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:07.060097   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:07.070755   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:07.070814   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:07.098777   44722 cri.go:89] found id: ""
	I1213 18:45:07.098790   44722 logs.go:282] 0 containers: []
	W1213 18:45:07.098797   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:07.098802   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:07.098863   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:07.126857   44722 cri.go:89] found id: ""
	I1213 18:45:07.126870   44722 logs.go:282] 0 containers: []
	W1213 18:45:07.126877   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:07.126882   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:07.126938   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:07.154665   44722 cri.go:89] found id: ""
	I1213 18:45:07.154679   44722 logs.go:282] 0 containers: []
	W1213 18:45:07.154686   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:07.154691   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:07.154751   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:07.183998   44722 cri.go:89] found id: ""
	I1213 18:45:07.184011   44722 logs.go:282] 0 containers: []
	W1213 18:45:07.184018   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:07.184023   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:07.184079   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:07.209217   44722 cri.go:89] found id: ""
	I1213 18:45:07.209230   44722 logs.go:282] 0 containers: []
	W1213 18:45:07.209238   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:07.209249   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:07.209309   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:07.238297   44722 cri.go:89] found id: ""
	I1213 18:45:07.238321   44722 logs.go:282] 0 containers: []
	W1213 18:45:07.238328   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:07.238333   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:07.238392   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:07.268115   44722 cri.go:89] found id: ""
	I1213 18:45:07.268130   44722 logs.go:282] 0 containers: []
	W1213 18:45:07.268136   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:07.268144   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:07.268156   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:07.337456   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:07.337475   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:07.365283   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:07.365299   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:07.433864   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:07.433882   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:07.445039   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:07.445055   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:07.509195   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:07.500621   14726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:07.500993   14726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:07.502681   14726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:07.503001   14726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:07.504545   14726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:07.500621   14726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:07.500993   14726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:07.502681   14726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:07.503001   14726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:07.504545   14726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:10.010342   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:10.026847   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:10.026923   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:10.055758   44722 cri.go:89] found id: ""
	I1213 18:45:10.055773   44722 logs.go:282] 0 containers: []
	W1213 18:45:10.055781   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:10.055786   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:10.055847   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:10.084492   44722 cri.go:89] found id: ""
	I1213 18:45:10.084508   44722 logs.go:282] 0 containers: []
	W1213 18:45:10.084515   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:10.084521   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:10.084579   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:10.124733   44722 cri.go:89] found id: ""
	I1213 18:45:10.124748   44722 logs.go:282] 0 containers: []
	W1213 18:45:10.124756   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:10.124760   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:10.124823   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:10.167562   44722 cri.go:89] found id: ""
	I1213 18:45:10.167575   44722 logs.go:282] 0 containers: []
	W1213 18:45:10.167583   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:10.167588   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:10.167647   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:10.196162   44722 cri.go:89] found id: ""
	I1213 18:45:10.196178   44722 logs.go:282] 0 containers: []
	W1213 18:45:10.196185   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:10.196190   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:10.196251   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:10.222349   44722 cri.go:89] found id: ""
	I1213 18:45:10.222362   44722 logs.go:282] 0 containers: []
	W1213 18:45:10.222370   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:10.222375   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:10.222433   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:10.252822   44722 cri.go:89] found id: ""
	I1213 18:45:10.252838   44722 logs.go:282] 0 containers: []
	W1213 18:45:10.252848   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:10.252856   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:10.252867   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:10.318555   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:10.318574   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:10.330833   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:10.330848   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:10.403119   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:10.391784   14819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:10.392505   14819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:10.394095   14819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:10.394656   14819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:10.396739   14819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:10.391784   14819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:10.392505   14819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:10.394095   14819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:10.394656   14819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:10.396739   14819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:10.403129   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:10.403139   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:10.476776   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:10.476796   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:13.006030   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:13.016994   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:13.017078   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:13.047302   44722 cri.go:89] found id: ""
	I1213 18:45:13.047316   44722 logs.go:282] 0 containers: []
	W1213 18:45:13.047322   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:13.047327   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:13.047390   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:13.072990   44722 cri.go:89] found id: ""
	I1213 18:45:13.073014   44722 logs.go:282] 0 containers: []
	W1213 18:45:13.073024   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:13.073029   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:13.073086   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:13.104144   44722 cri.go:89] found id: ""
	I1213 18:45:13.104158   44722 logs.go:282] 0 containers: []
	W1213 18:45:13.104165   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:13.104169   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:13.104233   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:13.133122   44722 cri.go:89] found id: ""
	I1213 18:45:13.133135   44722 logs.go:282] 0 containers: []
	W1213 18:45:13.133141   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:13.133147   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:13.133228   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:13.165373   44722 cri.go:89] found id: ""
	I1213 18:45:13.165399   44722 logs.go:282] 0 containers: []
	W1213 18:45:13.165406   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:13.165411   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:13.165473   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:13.191991   44722 cri.go:89] found id: ""
	I1213 18:45:13.192004   44722 logs.go:282] 0 containers: []
	W1213 18:45:13.192012   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:13.192017   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:13.192082   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:13.217774   44722 cri.go:89] found id: ""
	I1213 18:45:13.217788   44722 logs.go:282] 0 containers: []
	W1213 18:45:13.217795   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:13.217802   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:13.217813   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:13.284517   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:13.275477   14920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:13.276368   14920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:13.278192   14920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:13.278786   14920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:13.280431   14920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:13.275477   14920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:13.276368   14920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:13.278192   14920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:13.278786   14920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:13.280431   14920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:13.284527   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:13.284538   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:13.353730   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:13.353749   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:13.384210   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:13.384225   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:13.452832   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:13.452849   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:15.964206   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:15.976388   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:15.976453   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:16.006122   44722 cri.go:89] found id: ""
	I1213 18:45:16.006136   44722 logs.go:282] 0 containers: []
	W1213 18:45:16.006143   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:16.006149   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:16.006211   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:16.031686   44722 cri.go:89] found id: ""
	I1213 18:45:16.031700   44722 logs.go:282] 0 containers: []
	W1213 18:45:16.031707   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:16.031712   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:16.031768   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:16.057702   44722 cri.go:89] found id: ""
	I1213 18:45:16.057715   44722 logs.go:282] 0 containers: []
	W1213 18:45:16.057722   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:16.057728   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:16.057783   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:16.090888   44722 cri.go:89] found id: ""
	I1213 18:45:16.090913   44722 logs.go:282] 0 containers: []
	W1213 18:45:16.090921   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:16.090927   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:16.090997   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:16.128051   44722 cri.go:89] found id: ""
	I1213 18:45:16.128075   44722 logs.go:282] 0 containers: []
	W1213 18:45:16.128083   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:16.128089   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:16.128160   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:16.157962   44722 cri.go:89] found id: ""
	I1213 18:45:16.157986   44722 logs.go:282] 0 containers: []
	W1213 18:45:16.157993   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:16.157999   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:16.158057   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:16.184049   44722 cri.go:89] found id: ""
	I1213 18:45:16.184063   44722 logs.go:282] 0 containers: []
	W1213 18:45:16.184070   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:16.184077   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:16.184088   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:16.250129   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:16.250149   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:16.261107   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:16.261125   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:16.330408   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:16.321894   15030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:16.322673   15030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:16.324350   15030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:16.324661   15030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:16.326266   15030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:16.321894   15030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:16.322673   15030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:16.324350   15030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:16.324661   15030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:16.326266   15030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:16.330418   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:16.330428   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:16.398576   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:16.398594   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:18.928496   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:18.938797   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:18.938873   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:18.964909   44722 cri.go:89] found id: ""
	I1213 18:45:18.964924   44722 logs.go:282] 0 containers: []
	W1213 18:45:18.964932   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:18.964939   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:18.964999   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:18.991414   44722 cri.go:89] found id: ""
	I1213 18:45:18.991428   44722 logs.go:282] 0 containers: []
	W1213 18:45:18.991446   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:18.991451   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:18.991508   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:19.021961   44722 cri.go:89] found id: ""
	I1213 18:45:19.021976   44722 logs.go:282] 0 containers: []
	W1213 18:45:19.021983   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:19.021988   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:19.022055   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:19.046931   44722 cri.go:89] found id: ""
	I1213 18:45:19.046945   44722 logs.go:282] 0 containers: []
	W1213 18:45:19.046952   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:19.046957   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:19.047013   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:19.072683   44722 cri.go:89] found id: ""
	I1213 18:45:19.072696   44722 logs.go:282] 0 containers: []
	W1213 18:45:19.072703   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:19.072708   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:19.072778   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:19.100627   44722 cri.go:89] found id: ""
	I1213 18:45:19.100643   44722 logs.go:282] 0 containers: []
	W1213 18:45:19.100651   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:19.100656   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:19.100720   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:19.130142   44722 cri.go:89] found id: ""
	I1213 18:45:19.130157   44722 logs.go:282] 0 containers: []
	W1213 18:45:19.130163   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:19.130171   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:19.130182   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:19.197474   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:19.197494   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:19.208889   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:19.208908   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:19.274541   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:19.265647   15139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:19.266238   15139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:19.267928   15139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:19.268736   15139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:19.270556   15139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:19.265647   15139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:19.266238   15139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:19.267928   15139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:19.268736   15139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:19.270556   15139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:19.274551   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:19.274561   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:19.342919   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:19.342938   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:21.872871   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:21.883492   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:21.883550   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:21.910011   44722 cri.go:89] found id: ""
	I1213 18:45:21.910025   44722 logs.go:282] 0 containers: []
	W1213 18:45:21.910032   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:21.910037   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:21.910094   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:21.935440   44722 cri.go:89] found id: ""
	I1213 18:45:21.935454   44722 logs.go:282] 0 containers: []
	W1213 18:45:21.935461   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:21.935476   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:21.935535   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:21.970166   44722 cri.go:89] found id: ""
	I1213 18:45:21.970181   44722 logs.go:282] 0 containers: []
	W1213 18:45:21.970188   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:21.970193   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:21.970254   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:21.996521   44722 cri.go:89] found id: ""
	I1213 18:45:21.996544   44722 logs.go:282] 0 containers: []
	W1213 18:45:21.996552   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:21.996557   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:21.996625   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:22.026015   44722 cri.go:89] found id: ""
	I1213 18:45:22.026030   44722 logs.go:282] 0 containers: []
	W1213 18:45:22.026048   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:22.026054   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:22.026136   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:22.052512   44722 cri.go:89] found id: ""
	I1213 18:45:22.052526   44722 logs.go:282] 0 containers: []
	W1213 18:45:22.052533   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:22.052547   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:22.052634   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:22.087211   44722 cri.go:89] found id: ""
	I1213 18:45:22.087242   44722 logs.go:282] 0 containers: []
	W1213 18:45:22.087249   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:22.087258   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:22.087268   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:22.161238   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:22.161256   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:22.172311   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:22.172327   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:22.235337   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:22.226748   15248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:22.227404   15248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:22.229399   15248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:22.229780   15248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:22.231333   15248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:22.226748   15248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:22.227404   15248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:22.229399   15248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:22.229780   15248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:22.231333   15248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:22.235349   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:22.235360   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:22.304771   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:22.304790   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:24.834025   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:24.844561   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:24.844623   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:24.869497   44722 cri.go:89] found id: ""
	I1213 18:45:24.869512   44722 logs.go:282] 0 containers: []
	W1213 18:45:24.869519   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:24.869524   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:24.869582   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:24.899663   44722 cri.go:89] found id: ""
	I1213 18:45:24.899677   44722 logs.go:282] 0 containers: []
	W1213 18:45:24.899685   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:24.899690   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:24.899750   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:24.929664   44722 cri.go:89] found id: ""
	I1213 18:45:24.929678   44722 logs.go:282] 0 containers: []
	W1213 18:45:24.929685   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:24.929689   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:24.929748   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:24.954943   44722 cri.go:89] found id: ""
	I1213 18:45:24.954957   44722 logs.go:282] 0 containers: []
	W1213 18:45:24.954964   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:24.954969   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:24.955024   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:24.981964   44722 cri.go:89] found id: ""
	I1213 18:45:24.981978   44722 logs.go:282] 0 containers: []
	W1213 18:45:24.981985   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:24.981991   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:24.982048   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:25.024491   44722 cri.go:89] found id: ""
	I1213 18:45:25.024507   44722 logs.go:282] 0 containers: []
	W1213 18:45:25.024514   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:25.024519   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:25.024587   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:25.059717   44722 cri.go:89] found id: ""
	I1213 18:45:25.059732   44722 logs.go:282] 0 containers: []
	W1213 18:45:25.059740   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:25.059747   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:25.059758   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:25.137684   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:25.137709   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:25.152450   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:25.152466   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:25.224073   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:25.215282   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:25.215897   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:25.217852   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:25.218715   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:25.219908   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:25.215282   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:25.215897   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:25.217852   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:25.218715   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:25.219908   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:25.224083   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:25.224095   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:25.293145   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:25.293164   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:27.825368   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:27.835872   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:27.835932   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:27.861658   44722 cri.go:89] found id: ""
	I1213 18:45:27.861672   44722 logs.go:282] 0 containers: []
	W1213 18:45:27.861679   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:27.861684   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:27.861742   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:27.886615   44722 cri.go:89] found id: ""
	I1213 18:45:27.886629   44722 logs.go:282] 0 containers: []
	W1213 18:45:27.886636   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:27.886641   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:27.886697   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:27.915655   44722 cri.go:89] found id: ""
	I1213 18:45:27.915669   44722 logs.go:282] 0 containers: []
	W1213 18:45:27.915676   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:27.915681   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:27.915743   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:27.940463   44722 cri.go:89] found id: ""
	I1213 18:45:27.940477   44722 logs.go:282] 0 containers: []
	W1213 18:45:27.940484   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:27.940489   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:27.940546   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:27.970042   44722 cri.go:89] found id: ""
	I1213 18:45:27.970056   44722 logs.go:282] 0 containers: []
	W1213 18:45:27.970063   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:27.970068   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:27.970125   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:27.996687   44722 cri.go:89] found id: ""
	I1213 18:45:27.996702   44722 logs.go:282] 0 containers: []
	W1213 18:45:27.996708   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:27.996714   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:27.996773   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:28.025848   44722 cri.go:89] found id: ""
	I1213 18:45:28.025861   44722 logs.go:282] 0 containers: []
	W1213 18:45:28.025868   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:28.025876   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:28.025894   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:28.104265   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:28.104292   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:28.116838   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:28.116855   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:28.189318   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:28.180911   15462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:28.181676   15462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:28.183358   15462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:28.184009   15462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:28.185382   15462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:28.180911   15462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:28.181676   15462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:28.183358   15462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:28.184009   15462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:28.185382   15462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:28.189329   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:28.189340   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:28.257409   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:28.257428   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:30.789289   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:30.799688   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:30.799748   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:30.828658   44722 cri.go:89] found id: ""
	I1213 18:45:30.828672   44722 logs.go:282] 0 containers: []
	W1213 18:45:30.828680   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:30.828688   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:30.828748   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:30.854242   44722 cri.go:89] found id: ""
	I1213 18:45:30.854256   44722 logs.go:282] 0 containers: []
	W1213 18:45:30.854263   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:30.854268   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:30.854325   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:30.879211   44722 cri.go:89] found id: ""
	I1213 18:45:30.879225   44722 logs.go:282] 0 containers: []
	W1213 18:45:30.879235   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:30.879241   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:30.879298   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:30.908380   44722 cri.go:89] found id: ""
	I1213 18:45:30.908394   44722 logs.go:282] 0 containers: []
	W1213 18:45:30.908401   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:30.908406   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:30.908462   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:30.934004   44722 cri.go:89] found id: ""
	I1213 18:45:30.934023   44722 logs.go:282] 0 containers: []
	W1213 18:45:30.934030   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:30.934035   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:30.934094   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:30.959088   44722 cri.go:89] found id: ""
	I1213 18:45:30.959101   44722 logs.go:282] 0 containers: []
	W1213 18:45:30.959108   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:30.959113   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:30.959172   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:30.987128   44722 cri.go:89] found id: ""
	I1213 18:45:30.987142   44722 logs.go:282] 0 containers: []
	W1213 18:45:30.987149   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:30.987156   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:30.987167   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:30.999233   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:30.999253   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:31.070686   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:31.062512   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:31.063387   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:31.064956   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:31.065476   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:31.066859   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:31.062512   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:31.063387   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:31.064956   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:31.065476   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:31.066859   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:31.070697   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:31.070708   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:31.149373   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:31.149393   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:31.182467   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:31.182484   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:33.754920   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:33.764984   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:33.765061   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:33.789610   44722 cri.go:89] found id: ""
	I1213 18:45:33.789624   44722 logs.go:282] 0 containers: []
	W1213 18:45:33.789630   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:33.789635   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:33.789694   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:33.814723   44722 cri.go:89] found id: ""
	I1213 18:45:33.814738   44722 logs.go:282] 0 containers: []
	W1213 18:45:33.814744   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:33.814749   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:33.814811   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:33.841835   44722 cri.go:89] found id: ""
	I1213 18:45:33.841848   44722 logs.go:282] 0 containers: []
	W1213 18:45:33.841855   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:33.841860   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:33.841917   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:33.875847   44722 cri.go:89] found id: ""
	I1213 18:45:33.875871   44722 logs.go:282] 0 containers: []
	W1213 18:45:33.875878   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:33.875885   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:33.875953   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:33.903037   44722 cri.go:89] found id: ""
	I1213 18:45:33.903050   44722 logs.go:282] 0 containers: []
	W1213 18:45:33.903057   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:33.903062   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:33.903135   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:33.934423   44722 cri.go:89] found id: ""
	I1213 18:45:33.934437   44722 logs.go:282] 0 containers: []
	W1213 18:45:33.934444   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:33.934449   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:33.934522   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:33.959437   44722 cri.go:89] found id: ""
	I1213 18:45:33.959450   44722 logs.go:282] 0 containers: []
	W1213 18:45:33.959458   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:33.959465   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:33.959475   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:34.024568   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:34.024587   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:34.036558   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:34.036583   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:34.113960   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:34.105595   15665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:34.106445   15665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:34.107646   15665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:34.108191   15665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:34.109855   15665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:34.105595   15665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:34.106445   15665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:34.107646   15665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:34.108191   15665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:34.109855   15665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:34.113970   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:34.113988   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:34.186879   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:34.186900   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:36.717771   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:36.731405   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:36.731462   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:36.758511   44722 cri.go:89] found id: ""
	I1213 18:45:36.758525   44722 logs.go:282] 0 containers: []
	W1213 18:45:36.758532   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:36.758537   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:36.758595   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:36.784601   44722 cri.go:89] found id: ""
	I1213 18:45:36.784614   44722 logs.go:282] 0 containers: []
	W1213 18:45:36.784621   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:36.784626   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:36.784683   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:36.813889   44722 cri.go:89] found id: ""
	I1213 18:45:36.813903   44722 logs.go:282] 0 containers: []
	W1213 18:45:36.813910   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:36.813915   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:36.813974   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:36.840673   44722 cri.go:89] found id: ""
	I1213 18:45:36.840687   44722 logs.go:282] 0 containers: []
	W1213 18:45:36.840695   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:36.840701   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:36.840758   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:36.866658   44722 cri.go:89] found id: ""
	I1213 18:45:36.866673   44722 logs.go:282] 0 containers: []
	W1213 18:45:36.866679   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:36.866684   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:36.866761   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:36.893289   44722 cri.go:89] found id: ""
	I1213 18:45:36.893303   44722 logs.go:282] 0 containers: []
	W1213 18:45:36.893311   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:36.893316   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:36.893377   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:36.920158   44722 cri.go:89] found id: ""
	I1213 18:45:36.920171   44722 logs.go:282] 0 containers: []
	W1213 18:45:36.920178   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:36.920186   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:36.920196   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:36.987002   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:36.987021   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:36.999105   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:36.999128   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:37.072378   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:37.063848   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:37.064510   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:37.066038   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:37.066549   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:37.067999   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:37.063848   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:37.064510   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:37.066038   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:37.066549   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:37.067999   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:37.072390   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:37.072401   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:37.145027   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:37.145047   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:39.682857   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:39.693055   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:39.693114   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:39.717750   44722 cri.go:89] found id: ""
	I1213 18:45:39.717763   44722 logs.go:282] 0 containers: []
	W1213 18:45:39.717771   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:39.717776   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:39.717831   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:39.748452   44722 cri.go:89] found id: ""
	I1213 18:45:39.748466   44722 logs.go:282] 0 containers: []
	W1213 18:45:39.748473   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:39.748478   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:39.748535   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:39.775686   44722 cri.go:89] found id: ""
	I1213 18:45:39.775700   44722 logs.go:282] 0 containers: []
	W1213 18:45:39.775706   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:39.775712   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:39.775773   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:39.801049   44722 cri.go:89] found id: ""
	I1213 18:45:39.801063   44722 logs.go:282] 0 containers: []
	W1213 18:45:39.801070   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:39.801075   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:39.801132   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:39.829545   44722 cri.go:89] found id: ""
	I1213 18:45:39.829559   44722 logs.go:282] 0 containers: []
	W1213 18:45:39.829566   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:39.829571   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:39.829627   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:39.855870   44722 cri.go:89] found id: ""
	I1213 18:45:39.855883   44722 logs.go:282] 0 containers: []
	W1213 18:45:39.855890   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:39.855895   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:39.855951   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:39.880432   44722 cri.go:89] found id: ""
	I1213 18:45:39.880446   44722 logs.go:282] 0 containers: []
	W1213 18:45:39.880452   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:39.880460   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:39.880471   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:39.944602   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:39.936636   15870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:39.937539   15870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:39.939109   15870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:39.939488   15870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:39.940927   15870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:39.936636   15870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:39.937539   15870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:39.939109   15870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:39.939488   15870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:39.940927   15870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:39.944613   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:39.944623   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:40.014162   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:40.014186   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:40.052762   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:40.052780   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:40.123344   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:40.123364   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:42.639745   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:42.650139   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:42.650196   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:42.674810   44722 cri.go:89] found id: ""
	I1213 18:45:42.674824   44722 logs.go:282] 0 containers: []
	W1213 18:45:42.674831   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:42.674836   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:42.674896   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:42.705498   44722 cri.go:89] found id: ""
	I1213 18:45:42.705512   44722 logs.go:282] 0 containers: []
	W1213 18:45:42.705519   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:42.705524   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:42.705590   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:42.731558   44722 cri.go:89] found id: ""
	I1213 18:45:42.731572   44722 logs.go:282] 0 containers: []
	W1213 18:45:42.731586   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:42.731591   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:42.731650   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:42.758070   44722 cri.go:89] found id: ""
	I1213 18:45:42.758084   44722 logs.go:282] 0 containers: []
	W1213 18:45:42.758098   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:42.758103   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:42.758163   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:42.784043   44722 cri.go:89] found id: ""
	I1213 18:45:42.784057   44722 logs.go:282] 0 containers: []
	W1213 18:45:42.784065   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:42.784069   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:42.784130   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:42.810580   44722 cri.go:89] found id: ""
	I1213 18:45:42.810594   44722 logs.go:282] 0 containers: []
	W1213 18:45:42.810602   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:42.810607   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:42.810667   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:42.837217   44722 cri.go:89] found id: ""
	I1213 18:45:42.837230   44722 logs.go:282] 0 containers: []
	W1213 18:45:42.837237   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:42.837244   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:42.837255   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:42.869269   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:42.869289   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:42.937246   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:42.937265   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:42.948535   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:42.948551   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:43.014525   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:43.006257   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:43.006741   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:43.008386   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:43.008729   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:43.010279   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:43.006257   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:43.006741   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:43.008386   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:43.008729   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:43.010279   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:43.014550   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:43.014561   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:45.585650   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:45.596016   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:45.596081   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:45.621732   44722 cri.go:89] found id: ""
	I1213 18:45:45.621746   44722 logs.go:282] 0 containers: []
	W1213 18:45:45.621753   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:45.621758   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:45.621828   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:45.647999   44722 cri.go:89] found id: ""
	I1213 18:45:45.648013   44722 logs.go:282] 0 containers: []
	W1213 18:45:45.648020   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:45.648025   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:45.648084   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:45.672656   44722 cri.go:89] found id: ""
	I1213 18:45:45.672669   44722 logs.go:282] 0 containers: []
	W1213 18:45:45.672676   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:45.672681   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:45.672737   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:45.697633   44722 cri.go:89] found id: ""
	I1213 18:45:45.697648   44722 logs.go:282] 0 containers: []
	W1213 18:45:45.697655   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:45.697660   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:45.697725   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:45.722938   44722 cri.go:89] found id: ""
	I1213 18:45:45.722957   44722 logs.go:282] 0 containers: []
	W1213 18:45:45.722964   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:45.722969   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:45.723027   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:45.753044   44722 cri.go:89] found id: ""
	I1213 18:45:45.753057   44722 logs.go:282] 0 containers: []
	W1213 18:45:45.753064   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:45.753069   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:45.753139   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:45.777945   44722 cri.go:89] found id: ""
	I1213 18:45:45.777959   44722 logs.go:282] 0 containers: []
	W1213 18:45:45.777966   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:45.777974   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:45.777984   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:45.788618   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:45.788634   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:45.856342   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:45.847135   16083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:45.847845   16083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:45.849739   16083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:45.850385   16083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:45.851966   16083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:45.847135   16083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:45.847845   16083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:45.849739   16083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:45.850385   16083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:45.851966   16083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:45.856353   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:45.856363   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:45.925928   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:45.925948   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:45.955270   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:45.955286   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:48.526489   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:48.536804   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:48.536878   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:48.564096   44722 cri.go:89] found id: ""
	I1213 18:45:48.564110   44722 logs.go:282] 0 containers: []
	W1213 18:45:48.564116   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:48.564121   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:48.564180   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:48.589084   44722 cri.go:89] found id: ""
	I1213 18:45:48.589098   44722 logs.go:282] 0 containers: []
	W1213 18:45:48.589105   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:48.589117   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:48.589174   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:48.614957   44722 cri.go:89] found id: ""
	I1213 18:45:48.614971   44722 logs.go:282] 0 containers: []
	W1213 18:45:48.614978   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:48.614989   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:48.615045   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:48.639705   44722 cri.go:89] found id: ""
	I1213 18:45:48.639719   44722 logs.go:282] 0 containers: []
	W1213 18:45:48.639725   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:48.639730   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:48.639789   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:48.665151   44722 cri.go:89] found id: ""
	I1213 18:45:48.665165   44722 logs.go:282] 0 containers: []
	W1213 18:45:48.665171   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:48.665176   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:48.665237   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:48.691765   44722 cri.go:89] found id: ""
	I1213 18:45:48.691779   44722 logs.go:282] 0 containers: []
	W1213 18:45:48.691786   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:48.691791   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:48.691846   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:48.718076   44722 cri.go:89] found id: ""
	I1213 18:45:48.718089   44722 logs.go:282] 0 containers: []
	W1213 18:45:48.718096   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:48.718104   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:48.718115   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:48.729150   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:48.729166   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:48.795759   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:48.787631   16186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:48.788312   16186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:48.790025   16186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:48.790514   16186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:48.791993   16186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:48.787631   16186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:48.788312   16186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:48.790025   16186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:48.790514   16186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:48.791993   16186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:48.795769   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:48.795780   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:48.865101   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:48.865123   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:48.893317   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:48.893332   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:51.461504   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:51.471540   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:51.471603   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:51.496535   44722 cri.go:89] found id: ""
	I1213 18:45:51.496549   44722 logs.go:282] 0 containers: []
	W1213 18:45:51.496556   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:51.496561   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:51.496620   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:51.523516   44722 cri.go:89] found id: ""
	I1213 18:45:51.523530   44722 logs.go:282] 0 containers: []
	W1213 18:45:51.523537   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:51.523542   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:51.523601   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:51.548779   44722 cri.go:89] found id: ""
	I1213 18:45:51.548792   44722 logs.go:282] 0 containers: []
	W1213 18:45:51.548799   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:51.548804   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:51.548862   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:51.574426   44722 cri.go:89] found id: ""
	I1213 18:45:51.574439   44722 logs.go:282] 0 containers: []
	W1213 18:45:51.574446   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:51.574451   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:51.574508   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:51.601095   44722 cri.go:89] found id: ""
	I1213 18:45:51.601116   44722 logs.go:282] 0 containers: []
	W1213 18:45:51.601123   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:51.601128   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:51.601185   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:51.630300   44722 cri.go:89] found id: ""
	I1213 18:45:51.630314   44722 logs.go:282] 0 containers: []
	W1213 18:45:51.630321   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:51.630326   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:51.630388   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:51.658180   44722 cri.go:89] found id: ""
	I1213 18:45:51.658194   44722 logs.go:282] 0 containers: []
	W1213 18:45:51.658200   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:51.658208   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:51.658218   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:51.727599   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:51.727617   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:51.740526   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:51.740543   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:51.824581   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:51.815003   16296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:51.815673   16296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:51.817551   16296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:51.818376   16296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:51.820029   16296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:51.815003   16296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:51.815673   16296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:51.817551   16296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:51.818376   16296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:51.820029   16296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:51.824598   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:51.824608   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:51.895130   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:51.895149   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:54.423725   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:54.434109   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:54.434167   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:54.461075   44722 cri.go:89] found id: ""
	I1213 18:45:54.461096   44722 logs.go:282] 0 containers: []
	W1213 18:45:54.461104   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:54.461109   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:54.461169   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:54.486465   44722 cri.go:89] found id: ""
	I1213 18:45:54.486479   44722 logs.go:282] 0 containers: []
	W1213 18:45:54.486485   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:54.486490   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:54.486545   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:54.512518   44722 cri.go:89] found id: ""
	I1213 18:45:54.512532   44722 logs.go:282] 0 containers: []
	W1213 18:45:54.512539   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:54.512556   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:54.512613   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:54.539809   44722 cri.go:89] found id: ""
	I1213 18:45:54.539823   44722 logs.go:282] 0 containers: []
	W1213 18:45:54.539830   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:54.539835   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:54.539897   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:54.570146   44722 cri.go:89] found id: ""
	I1213 18:45:54.570159   44722 logs.go:282] 0 containers: []
	W1213 18:45:54.570166   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:54.570170   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:54.570224   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:54.596027   44722 cri.go:89] found id: ""
	I1213 18:45:54.596041   44722 logs.go:282] 0 containers: []
	W1213 18:45:54.596047   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:54.596052   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:54.596113   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:54.623337   44722 cri.go:89] found id: ""
	I1213 18:45:54.623351   44722 logs.go:282] 0 containers: []
	W1213 18:45:54.623358   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:54.623367   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:54.623382   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:54.654287   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:54.654305   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:54.720405   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:54.720426   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:54.731640   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:54.731656   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:54.800062   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:54.792084   16414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:54.792588   16414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:54.794071   16414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:54.794411   16414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:54.795882   16414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:54.792084   16414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:54.792588   16414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:54.794071   16414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:54.794411   16414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:54.795882   16414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:54.800085   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:54.800095   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:57.370530   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:57.381975   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:57.382044   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:57.410748   44722 cri.go:89] found id: ""
	I1213 18:45:57.410761   44722 logs.go:282] 0 containers: []
	W1213 18:45:57.410768   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:57.410773   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:57.410834   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:57.437110   44722 cri.go:89] found id: ""
	I1213 18:45:57.437123   44722 logs.go:282] 0 containers: []
	W1213 18:45:57.437130   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:57.437135   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:57.437196   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:57.463356   44722 cri.go:89] found id: ""
	I1213 18:45:57.463370   44722 logs.go:282] 0 containers: []
	W1213 18:45:57.463377   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:57.463381   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:57.463436   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:57.488350   44722 cri.go:89] found id: ""
	I1213 18:45:57.488364   44722 logs.go:282] 0 containers: []
	W1213 18:45:57.488381   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:57.488387   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:57.488442   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:57.513926   44722 cri.go:89] found id: ""
	I1213 18:45:57.513939   44722 logs.go:282] 0 containers: []
	W1213 18:45:57.513951   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:57.513956   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:57.514013   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:57.539641   44722 cri.go:89] found id: ""
	I1213 18:45:57.539655   44722 logs.go:282] 0 containers: []
	W1213 18:45:57.539661   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:57.539666   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:57.539722   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:57.565672   44722 cri.go:89] found id: ""
	I1213 18:45:57.565686   44722 logs.go:282] 0 containers: []
	W1213 18:45:57.565693   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:57.565700   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:57.565710   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:57.637461   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:57.637486   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:57.648402   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:57.648418   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:57.716551   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:57.708424   16510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:57.708971   16510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:57.710676   16510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:57.711086   16510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:57.712583   16510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:57.708424   16510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:57.708971   16510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:57.710676   16510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:57.711086   16510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:57.712583   16510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:57.716567   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:57.716579   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:57.785661   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:57.785681   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:46:00.318382   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:46:00.335223   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:46:00.335290   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:46:00.415052   44722 cri.go:89] found id: ""
	I1213 18:46:00.415068   44722 logs.go:282] 0 containers: []
	W1213 18:46:00.415075   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:46:00.415080   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:46:00.415144   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:46:00.448025   44722 cri.go:89] found id: ""
	I1213 18:46:00.448039   44722 logs.go:282] 0 containers: []
	W1213 18:46:00.448047   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:46:00.448052   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:46:00.448120   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:46:00.478830   44722 cri.go:89] found id: ""
	I1213 18:46:00.478844   44722 logs.go:282] 0 containers: []
	W1213 18:46:00.478851   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:46:00.478856   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:46:00.478915   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:46:00.510923   44722 cri.go:89] found id: ""
	I1213 18:46:00.510943   44722 logs.go:282] 0 containers: []
	W1213 18:46:00.510951   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:46:00.510956   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:46:00.511018   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:46:00.538053   44722 cri.go:89] found id: ""
	I1213 18:46:00.538068   44722 logs.go:282] 0 containers: []
	W1213 18:46:00.538075   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:46:00.538080   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:46:00.538139   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:46:00.563080   44722 cri.go:89] found id: ""
	I1213 18:46:00.563094   44722 logs.go:282] 0 containers: []
	W1213 18:46:00.563101   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:46:00.563107   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:46:00.563162   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:46:00.588696   44722 cri.go:89] found id: ""
	I1213 18:46:00.588710   44722 logs.go:282] 0 containers: []
	W1213 18:46:00.588716   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:46:00.588724   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:46:00.588734   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:46:00.655165   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:46:00.655185   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:46:00.667201   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:46:00.667217   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:46:00.732035   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:46:00.723385   16613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:00.723987   16613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:00.725839   16613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:00.726393   16613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:00.728162   16613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:46:00.723385   16613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:00.723987   16613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:00.725839   16613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:00.726393   16613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:00.728162   16613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:46:00.732045   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:46:00.732055   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:46:00.803574   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:46:00.803592   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:46:03.335736   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:46:03.347198   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:46:03.347266   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:46:03.376587   44722 cri.go:89] found id: ""
	I1213 18:46:03.376600   44722 logs.go:282] 0 containers: []
	W1213 18:46:03.376625   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:46:03.376630   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:46:03.376698   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:46:03.407284   44722 cri.go:89] found id: ""
	I1213 18:46:03.407298   44722 logs.go:282] 0 containers: []
	W1213 18:46:03.407305   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:46:03.407310   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:46:03.407379   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:46:03.432194   44722 cri.go:89] found id: ""
	I1213 18:46:03.432219   44722 logs.go:282] 0 containers: []
	W1213 18:46:03.432226   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:46:03.432231   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:46:03.432297   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:46:03.461490   44722 cri.go:89] found id: ""
	I1213 18:46:03.461504   44722 logs.go:282] 0 containers: []
	W1213 18:46:03.461520   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:46:03.461528   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:46:03.461586   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:46:03.486500   44722 cri.go:89] found id: ""
	I1213 18:46:03.486514   44722 logs.go:282] 0 containers: []
	W1213 18:46:03.486521   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:46:03.486526   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:46:03.486580   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:46:03.516064   44722 cri.go:89] found id: ""
	I1213 18:46:03.516079   44722 logs.go:282] 0 containers: []
	W1213 18:46:03.516095   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:46:03.516101   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:46:03.516173   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:46:03.543241   44722 cri.go:89] found id: ""
	I1213 18:46:03.543261   44722 logs.go:282] 0 containers: []
	W1213 18:46:03.543269   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:46:03.543277   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:46:03.543288   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:46:03.614698   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:46:03.606014   16715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:03.606848   16715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:03.608572   16715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:03.609328   16715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:03.610814   16715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:46:03.606014   16715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:03.606848   16715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:03.608572   16715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:03.609328   16715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:03.610814   16715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:46:03.614708   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:46:03.614719   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:46:03.683610   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:46:03.683629   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:46:03.714101   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:46:03.714118   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:46:03.783821   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:46:03.783841   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:46:06.296661   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:46:06.307402   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:46:06.307473   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:46:06.342139   44722 cri.go:89] found id: ""
	I1213 18:46:06.342152   44722 logs.go:282] 0 containers: []
	W1213 18:46:06.342159   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:46:06.342164   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:46:06.342223   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:46:06.376710   44722 cri.go:89] found id: ""
	I1213 18:46:06.376724   44722 logs.go:282] 0 containers: []
	W1213 18:46:06.376730   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:46:06.376735   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:46:06.376793   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:46:06.412732   44722 cri.go:89] found id: ""
	I1213 18:46:06.412746   44722 logs.go:282] 0 containers: []
	W1213 18:46:06.412753   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:46:06.412758   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:46:06.412814   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:46:06.445341   44722 cri.go:89] found id: ""
	I1213 18:46:06.445354   44722 logs.go:282] 0 containers: []
	W1213 18:46:06.445360   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:46:06.445365   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:46:06.445423   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:46:06.470587   44722 cri.go:89] found id: ""
	I1213 18:46:06.470601   44722 logs.go:282] 0 containers: []
	W1213 18:46:06.470608   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:46:06.470613   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:46:06.470667   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:46:06.495331   44722 cri.go:89] found id: ""
	I1213 18:46:06.495347   44722 logs.go:282] 0 containers: []
	W1213 18:46:06.495354   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:46:06.495360   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:46:06.495420   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:46:06.521489   44722 cri.go:89] found id: ""
	I1213 18:46:06.521503   44722 logs.go:282] 0 containers: []
	W1213 18:46:06.521510   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:46:06.521517   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:46:06.521531   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:46:06.552192   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:46:06.552209   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:46:06.618284   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:46:06.618302   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:46:06.630541   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:46:06.630558   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:46:06.702858   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:46:06.695039   16839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:06.695585   16839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:06.697148   16839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:06.697474   16839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:06.698996   16839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:46:06.695039   16839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:06.695585   16839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:06.697148   16839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:06.697474   16839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:06.698996   16839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:46:06.702868   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:46:06.702881   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:46:09.275499   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:46:09.285598   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:46:09.285657   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:46:09.313861   44722 cri.go:89] found id: ""
	I1213 18:46:09.313885   44722 logs.go:282] 0 containers: []
	W1213 18:46:09.313893   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:46:09.313898   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:46:09.313956   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:46:09.346645   44722 cri.go:89] found id: ""
	I1213 18:46:09.346661   44722 logs.go:282] 0 containers: []
	W1213 18:46:09.346671   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:46:09.346677   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:46:09.346742   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:46:09.381723   44722 cri.go:89] found id: ""
	I1213 18:46:09.381743   44722 logs.go:282] 0 containers: []
	W1213 18:46:09.381750   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:46:09.381755   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:46:09.381842   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:46:09.415093   44722 cri.go:89] found id: ""
	I1213 18:46:09.415106   44722 logs.go:282] 0 containers: []
	W1213 18:46:09.415113   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:46:09.415118   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:46:09.415178   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:46:09.440412   44722 cri.go:89] found id: ""
	I1213 18:46:09.440426   44722 logs.go:282] 0 containers: []
	W1213 18:46:09.440433   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:46:09.440438   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:46:09.440495   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:46:09.469945   44722 cri.go:89] found id: ""
	I1213 18:46:09.469959   44722 logs.go:282] 0 containers: []
	W1213 18:46:09.469965   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:46:09.469971   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:46:09.470037   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:46:09.495452   44722 cri.go:89] found id: ""
	I1213 18:46:09.495478   44722 logs.go:282] 0 containers: []
	W1213 18:46:09.495486   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:46:09.495494   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:46:09.495505   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:46:09.507701   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:46:09.507716   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:46:09.577735   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:46:09.564499   16927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:09.564927   16927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:09.571154   16927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:09.571832   16927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:09.573056   16927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:46:09.564499   16927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:09.564927   16927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:09.571154   16927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:09.571832   16927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:09.573056   16927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:46:09.577745   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:46:09.577756   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:46:09.650543   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:46:09.650564   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:46:09.680040   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:46:09.680057   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:46:12.249315   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:46:12.259200   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:46:12.259257   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:46:12.284607   44722 cri.go:89] found id: ""
	I1213 18:46:12.284620   44722 logs.go:282] 0 containers: []
	W1213 18:46:12.284627   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:46:12.284632   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:46:12.284697   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:46:12.318167   44722 cri.go:89] found id: ""
	I1213 18:46:12.318180   44722 logs.go:282] 0 containers: []
	W1213 18:46:12.318187   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:46:12.318191   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:46:12.318249   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:46:12.361187   44722 cri.go:89] found id: ""
	I1213 18:46:12.361201   44722 logs.go:282] 0 containers: []
	W1213 18:46:12.361208   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:46:12.361213   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:46:12.361270   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:46:12.396970   44722 cri.go:89] found id: ""
	I1213 18:46:12.396983   44722 logs.go:282] 0 containers: []
	W1213 18:46:12.396990   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:46:12.396995   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:46:12.397098   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:46:12.423202   44722 cri.go:89] found id: ""
	I1213 18:46:12.423215   44722 logs.go:282] 0 containers: []
	W1213 18:46:12.423222   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:46:12.423227   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:46:12.423286   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:46:12.448231   44722 cri.go:89] found id: ""
	I1213 18:46:12.448245   44722 logs.go:282] 0 containers: []
	W1213 18:46:12.448252   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:46:12.448257   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:46:12.448314   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:46:12.477927   44722 cri.go:89] found id: ""
	I1213 18:46:12.477941   44722 logs.go:282] 0 containers: []
	W1213 18:46:12.477949   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:46:12.477956   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:46:12.477966   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:46:12.547816   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:46:12.547834   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:46:12.559262   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:46:12.559280   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:46:12.622773   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:46:12.614428   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:12.615068   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:12.616576   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:12.617216   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:12.618857   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:46:12.614428   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:12.615068   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:12.616576   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:12.617216   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:12.618857   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:46:12.622783   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:46:12.622793   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:46:12.692295   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:46:12.692312   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:46:15.224550   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:46:15.235025   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:46:15.235085   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:46:15.261669   44722 cri.go:89] found id: ""
	I1213 18:46:15.261683   44722 logs.go:282] 0 containers: []
	W1213 18:46:15.261690   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:46:15.261695   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:46:15.261755   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:46:15.290899   44722 cri.go:89] found id: ""
	I1213 18:46:15.290913   44722 logs.go:282] 0 containers: []
	W1213 18:46:15.290920   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:46:15.290925   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:46:15.290979   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:46:15.317538   44722 cri.go:89] found id: ""
	I1213 18:46:15.317551   44722 logs.go:282] 0 containers: []
	W1213 18:46:15.317558   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:46:15.317563   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:46:15.317621   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:46:15.359563   44722 cri.go:89] found id: ""
	I1213 18:46:15.359577   44722 logs.go:282] 0 containers: []
	W1213 18:46:15.359584   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:46:15.359589   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:46:15.359645   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:46:15.395203   44722 cri.go:89] found id: ""
	I1213 18:46:15.395216   44722 logs.go:282] 0 containers: []
	W1213 18:46:15.395223   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:46:15.395228   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:46:15.395288   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:46:15.428291   44722 cri.go:89] found id: ""
	I1213 18:46:15.428304   44722 logs.go:282] 0 containers: []
	W1213 18:46:15.428311   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:46:15.428316   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:46:15.428372   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:46:15.453931   44722 cri.go:89] found id: ""
	I1213 18:46:15.453945   44722 logs.go:282] 0 containers: []
	W1213 18:46:15.453951   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:46:15.453958   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:46:15.453969   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:46:15.521521   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:46:15.512931   17130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:15.513463   17130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:15.515174   17130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:15.515484   17130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:15.517840   17130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:46:15.512931   17130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:15.513463   17130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:15.515174   17130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:15.515484   17130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:15.517840   17130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:46:15.521531   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:46:15.521541   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:46:15.591139   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:46:15.591160   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:46:15.622465   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:46:15.622481   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:46:15.691330   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:46:15.691348   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:46:18.203416   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:46:18.213952   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:46:18.214025   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:46:18.239778   44722 cri.go:89] found id: ""
	I1213 18:46:18.239792   44722 logs.go:282] 0 containers: []
	W1213 18:46:18.239808   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:46:18.239814   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:46:18.239879   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:46:18.264101   44722 cri.go:89] found id: ""
	I1213 18:46:18.264114   44722 logs.go:282] 0 containers: []
	W1213 18:46:18.264121   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:46:18.264126   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:46:18.264185   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:46:18.289302   44722 cri.go:89] found id: ""
	I1213 18:46:18.289316   44722 logs.go:282] 0 containers: []
	W1213 18:46:18.289323   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:46:18.289328   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:46:18.289386   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:46:18.316088   44722 cri.go:89] found id: ""
	I1213 18:46:18.316101   44722 logs.go:282] 0 containers: []
	W1213 18:46:18.316108   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:46:18.316116   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:46:18.316174   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:46:18.351768   44722 cri.go:89] found id: ""
	I1213 18:46:18.351781   44722 logs.go:282] 0 containers: []
	W1213 18:46:18.351788   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:46:18.351792   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:46:18.351846   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:46:18.382427   44722 cri.go:89] found id: ""
	I1213 18:46:18.382441   44722 logs.go:282] 0 containers: []
	W1213 18:46:18.382447   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:46:18.382452   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:46:18.382509   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:46:18.410191   44722 cri.go:89] found id: ""
	I1213 18:46:18.410205   44722 logs.go:282] 0 containers: []
	W1213 18:46:18.410212   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:46:18.410220   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:46:18.410230   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:46:18.473809   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:46:18.464747   17232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:18.465711   17232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:18.467472   17232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:18.467819   17232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:18.469591   17232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:46:18.464747   17232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:18.465711   17232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:18.467472   17232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:18.467819   17232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:18.469591   17232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:46:18.473819   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:46:18.473837   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:46:18.545360   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:46:18.545378   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:46:18.573170   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:46:18.573186   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:46:18.638179   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:46:18.638198   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:46:21.149461   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:46:21.159925   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:46:21.159987   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:46:21.185083   44722 cri.go:89] found id: ""
	I1213 18:46:21.185097   44722 logs.go:282] 0 containers: []
	W1213 18:46:21.185104   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:46:21.185109   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:46:21.185169   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:46:21.210110   44722 cri.go:89] found id: ""
	I1213 18:46:21.210124   44722 logs.go:282] 0 containers: []
	W1213 18:46:21.210131   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:46:21.210136   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:46:21.210199   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:46:21.235437   44722 cri.go:89] found id: ""
	I1213 18:46:21.235450   44722 logs.go:282] 0 containers: []
	W1213 18:46:21.235457   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:46:21.235462   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:46:21.235518   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:46:21.264027   44722 cri.go:89] found id: ""
	I1213 18:46:21.264041   44722 logs.go:282] 0 containers: []
	W1213 18:46:21.264061   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:46:21.264067   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:46:21.264134   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:46:21.291534   44722 cri.go:89] found id: ""
	I1213 18:46:21.291548   44722 logs.go:282] 0 containers: []
	W1213 18:46:21.291567   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:46:21.291571   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:46:21.291638   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:46:21.321987   44722 cri.go:89] found id: ""
	I1213 18:46:21.322010   44722 logs.go:282] 0 containers: []
	W1213 18:46:21.322018   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:46:21.322023   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:46:21.322088   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:46:21.354190   44722 cri.go:89] found id: ""
	I1213 18:46:21.354218   44722 logs.go:282] 0 containers: []
	W1213 18:46:21.354225   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:46:21.354232   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:46:21.354242   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:46:21.432072   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:46:21.432092   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:46:21.443924   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:46:21.443941   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:46:21.512256   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:46:21.503676   17342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:21.504240   17342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:21.506119   17342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:21.506493   17342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:21.508024   17342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:46:21.503676   17342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:21.504240   17342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:21.506119   17342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:21.506493   17342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:21.508024   17342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:46:21.512269   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:46:21.512281   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:46:21.584867   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:46:21.584887   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:46:24.118323   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:46:24.129552   44722 kubeadm.go:602] duration metric: took 4m2.563511626s to restartPrimaryControlPlane
	W1213 18:46:24.129614   44722 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
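The cycles above show minikube polling the node roughly every three seconds for control-plane containers (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet) via crictl, and gathering kubelet, CRI-O and dmesg logs in between; no container ever appears and the API server on localhost:8441 stays unreachable, so after about four minutes the control-plane restart is abandoned and the cluster is reset. The same probe can be run by hand on the node; a minimal sketch using only the commands that appear in the log above:

    # sketch: the control-plane probe minikube repeats in the loop above
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'      # is an apiserver process running at all?
    sudo crictl ps -a --quiet --name=kube-apiserver   # any apiserver container, running or exited?
    sudo journalctl -u kubelet -n 400                 # kubelet logs, usually the reason nothing starts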
	I1213 18:46:24.129691   44722 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 18:46:24.541036   44722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 18:46:24.553708   44722 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 18:46:24.561742   44722 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 18:46:24.561810   44722 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 18:46:24.569735   44722 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 18:46:24.569745   44722 kubeadm.go:158] found existing configuration files:
	
	I1213 18:46:24.569794   44722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 18:46:24.577570   44722 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 18:46:24.577624   44722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 18:46:24.584990   44722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 18:46:24.592683   44722 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 18:46:24.592744   44722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 18:46:24.600210   44722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 18:46:24.607772   44722 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 18:46:24.607829   44722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 18:46:24.615311   44722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 18:46:24.623206   44722 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 18:46:24.623270   44722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
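The four grep/rm pairs above are minikube's stale-kubeconfig cleanup: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint (here https://control-plane.minikube.internal:8441). Because the preceding kubeadm reset removed them, every grep exits with status 2 ("No such file or directory") and the files are removed again before kubeadm init regenerates them. A condensed sketch of the same check, using the paths and endpoint taken from the log:

    # sketch: stale-kubeconfig check performed before "kubeadm init"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8441" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"   # drop configs that do not point at the expected endpoint
    done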
	I1213 18:46:24.631351   44722 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 18:46:24.746076   44722 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 18:46:24.746546   44722 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 18:46:24.812383   44722 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 18:50:26.971755   44722 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 18:50:26.971788   44722 kubeadm.go:319] 
	I1213 18:50:26.971891   44722 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 18:50:26.975722   44722 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 18:50:26.975775   44722 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 18:50:26.975864   44722 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 18:50:26.975918   44722 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 18:50:26.975952   44722 kubeadm.go:319] OS: Linux
	I1213 18:50:26.975995   44722 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 18:50:26.976042   44722 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 18:50:26.976088   44722 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 18:50:26.976134   44722 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 18:50:26.976181   44722 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 18:50:26.976228   44722 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 18:50:26.976271   44722 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 18:50:26.976318   44722 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 18:50:26.976374   44722 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 18:50:26.976446   44722 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 18:50:26.976550   44722 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 18:50:26.976642   44722 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 18:50:26.976705   44722 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 18:50:26.979839   44722 out.go:252]   - Generating certificates and keys ...
	I1213 18:50:26.979929   44722 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 18:50:26.979994   44722 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 18:50:26.980071   44722 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 18:50:26.980130   44722 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 18:50:26.980204   44722 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 18:50:26.980256   44722 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 18:50:26.980323   44722 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 18:50:26.980389   44722 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 18:50:26.980463   44722 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 18:50:26.980534   44722 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 18:50:26.980570   44722 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 18:50:26.980625   44722 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 18:50:26.980698   44722 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 18:50:26.980766   44722 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 18:50:26.980827   44722 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 18:50:26.980893   44722 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 18:50:26.980947   44722 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 18:50:26.981062   44722 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 18:50:26.981134   44722 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 18:50:26.984046   44722 out.go:252]   - Booting up control plane ...
	I1213 18:50:26.984213   44722 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 18:50:26.984302   44722 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 18:50:26.984406   44722 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 18:50:26.984526   44722 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 18:50:26.984621   44722 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 18:50:26.984728   44722 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 18:50:26.984811   44722 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 18:50:26.984849   44722 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 18:50:26.984978   44722 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 18:50:26.985109   44722 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 18:50:26.985193   44722 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000261471s
	I1213 18:50:26.985199   44722 kubeadm.go:319] 
	I1213 18:50:26.985265   44722 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 18:50:26.985304   44722 kubeadm.go:319] 	- The kubelet is not running
	I1213 18:50:26.985407   44722 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 18:50:26.985410   44722 kubeadm.go:319] 
	I1213 18:50:26.985524   44722 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 18:50:26.985559   44722 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 18:50:26.985594   44722 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 18:50:26.985645   44722 kubeadm.go:319] 
	W1213 18:50:26.985723   44722 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000261471s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1213 18:50:26.989121   44722 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 18:50:27.401657   44722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 18:50:27.414174   44722 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 18:50:27.414227   44722 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 18:50:27.422069   44722 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 18:50:27.422079   44722 kubeadm.go:158] found existing configuration files:
	
	I1213 18:50:27.422131   44722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 18:50:27.429688   44722 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 18:50:27.429740   44722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 18:50:27.436848   44722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 18:50:27.444475   44722 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 18:50:27.444539   44722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 18:50:27.451626   44722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 18:50:27.458858   44722 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 18:50:27.458912   44722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 18:50:27.466216   44722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 18:50:27.473793   44722 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 18:50:27.473846   44722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 18:50:27.481268   44722 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 18:50:27.532748   44722 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 18:50:27.532805   44722 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 18:50:27.602576   44722 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 18:50:27.602639   44722 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 18:50:27.602674   44722 kubeadm.go:319] OS: Linux
	I1213 18:50:27.602718   44722 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 18:50:27.602765   44722 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 18:50:27.602811   44722 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 18:50:27.602858   44722 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 18:50:27.602905   44722 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 18:50:27.602952   44722 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 18:50:27.602996   44722 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 18:50:27.603043   44722 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 18:50:27.603088   44722 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 18:50:27.670270   44722 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 18:50:27.670407   44722 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 18:50:27.670497   44722 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 18:50:27.681577   44722 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 18:50:27.686860   44722 out.go:252]   - Generating certificates and keys ...
	I1213 18:50:27.686961   44722 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 18:50:27.687031   44722 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 18:50:27.687115   44722 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 18:50:27.687184   44722 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 18:50:27.687264   44722 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 18:50:27.687325   44722 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 18:50:27.687398   44722 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 18:50:27.687471   44722 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 18:50:27.687593   44722 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 18:50:27.687675   44722 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 18:50:27.687715   44722 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 18:50:27.687778   44722 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 18:50:28.283128   44722 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 18:50:28.400218   44722 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 18:50:28.813695   44722 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 18:50:29.036602   44722 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 18:50:29.078002   44722 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 18:50:29.078680   44722 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 18:50:29.081273   44722 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 18:50:29.084492   44722 out.go:252]   - Booting up control plane ...
	I1213 18:50:29.084588   44722 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 18:50:29.084675   44722 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 18:50:29.086298   44722 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 18:50:29.101051   44722 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 18:50:29.101487   44722 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 18:50:29.109109   44722 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 18:50:29.109586   44722 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 18:50:29.109636   44722 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 18:50:29.237458   44722 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 18:50:29.237571   44722 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 18:54:29.237512   44722 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000245862s
	I1213 18:54:29.237544   44722 kubeadm.go:319] 
	I1213 18:54:29.237597   44722 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 18:54:29.237627   44722 kubeadm.go:319] 	- The kubelet is not running
	I1213 18:54:29.237724   44722 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 18:54:29.237728   44722 kubeadm.go:319] 
	I1213 18:54:29.237836   44722 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 18:54:29.237865   44722 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 18:54:29.237893   44722 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 18:54:29.237896   44722 kubeadm.go:319] 
	I1213 18:54:29.241945   44722 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 18:54:29.242401   44722 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 18:54:29.242519   44722 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 18:54:29.242782   44722 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 18:54:29.242790   44722 kubeadm.go:319] 
	I1213 18:54:29.242854   44722 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 18:54:29.242916   44722 kubeadm.go:403] duration metric: took 12m7.716453663s to StartCluster
	I1213 18:54:29.242947   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:54:29.243009   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:54:29.267936   44722 cri.go:89] found id: ""
	I1213 18:54:29.267953   44722 logs.go:282] 0 containers: []
	W1213 18:54:29.267960   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:54:29.267966   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:54:29.268023   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:54:29.295961   44722 cri.go:89] found id: ""
	I1213 18:54:29.295975   44722 logs.go:282] 0 containers: []
	W1213 18:54:29.295982   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:54:29.295987   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:54:29.296049   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:54:29.321287   44722 cri.go:89] found id: ""
	I1213 18:54:29.321301   44722 logs.go:282] 0 containers: []
	W1213 18:54:29.321308   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:54:29.321313   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:54:29.321369   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:54:29.346752   44722 cri.go:89] found id: ""
	I1213 18:54:29.346766   44722 logs.go:282] 0 containers: []
	W1213 18:54:29.346773   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:54:29.346778   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:54:29.346840   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:54:29.373200   44722 cri.go:89] found id: ""
	I1213 18:54:29.373214   44722 logs.go:282] 0 containers: []
	W1213 18:54:29.373222   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:54:29.373227   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:54:29.373284   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:54:29.399377   44722 cri.go:89] found id: ""
	I1213 18:54:29.399390   44722 logs.go:282] 0 containers: []
	W1213 18:54:29.399397   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:54:29.399403   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:54:29.399459   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:54:29.427837   44722 cri.go:89] found id: ""
	I1213 18:54:29.427851   44722 logs.go:282] 0 containers: []
	W1213 18:54:29.427867   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:54:29.427876   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:54:29.427886   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:54:29.456109   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:54:29.456125   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:54:29.522138   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:54:29.522156   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:54:29.533671   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:54:29.533686   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:54:29.610367   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:54:29.601277   21159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:54:29.601976   21159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:54:29.603577   21159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:54:29.604094   21159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:54:29.605709   21159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:54:29.601277   21159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:54:29.601976   21159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:54:29.603577   21159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:54:29.604094   21159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:54:29.605709   21159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:54:29.610381   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:54:29.610392   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1213 18:54:29.688966   44722 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000245862s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 18:54:29.689015   44722 out.go:285] * 
	W1213 18:54:29.689125   44722 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000245862s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 18:54:29.689180   44722 out.go:285] * 
	W1213 18:54:29.691288   44722 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 18:54:29.696180   44722 out.go:203] 
	W1213 18:54:29.699069   44722 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000245862s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 18:54:29.699113   44722 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 18:54:29.699131   44722 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 18:54:29.702236   44722 out.go:203] 
	
	
	==> CRI-O <==
	Dec 13 18:42:19 functional-752103 crio[9949]: time="2025-12-13T18:42:19.862047918Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 13 18:42:19 functional-752103 crio[9949]: time="2025-12-13T18:42:19.862080558Z" level=info msg="Starting seccomp notifier watcher"
	Dec 13 18:42:19 functional-752103 crio[9949]: time="2025-12-13T18:42:19.862126375Z" level=info msg="Create NRI interface"
	Dec 13 18:42:19 functional-752103 crio[9949]: time="2025-12-13T18:42:19.862224993Z" level=info msg="built-in NRI default validator is disabled"
	Dec 13 18:42:19 functional-752103 crio[9949]: time="2025-12-13T18:42:19.86223278Z" level=info msg="runtime interface created"
	Dec 13 18:42:19 functional-752103 crio[9949]: time="2025-12-13T18:42:19.862244013Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 13 18:42:19 functional-752103 crio[9949]: time="2025-12-13T18:42:19.862251471Z" level=info msg="runtime interface starting up..."
	Dec 13 18:42:19 functional-752103 crio[9949]: time="2025-12-13T18:42:19.862256895Z" level=info msg="starting plugins..."
	Dec 13 18:42:19 functional-752103 crio[9949]: time="2025-12-13T18:42:19.862268768Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 18:42:19 functional-752103 crio[9949]: time="2025-12-13T18:42:19.862331636Z" level=info msg="No systemd watchdog enabled"
	Dec 13 18:42:19 functional-752103 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 13 18:46:24 functional-752103 crio[9949]: time="2025-12-13T18:46:24.818362642Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=dc50dc13-71bf-495d-a717-281bc180f2f6 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:46:24 functional-752103 crio[9949]: time="2025-12-13T18:46:24.819294668Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=d8721ade-dce9-4153-a322-5ccd7819b97b name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:46:24 functional-752103 crio[9949]: time="2025-12-13T18:46:24.81975854Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=490f044a-8303-4886-ba98-7360ebf1ca73 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:46:24 functional-752103 crio[9949]: time="2025-12-13T18:46:24.820179047Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=12624e30-2525-4636-9934-824ea63a04cd name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:46:24 functional-752103 crio[9949]: time="2025-12-13T18:46:24.82056529Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=1e7d135f-0cd8-4d54-96f0-f28f4e7904d3 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:46:24 functional-752103 crio[9949]: time="2025-12-13T18:46:24.820930436Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=30771d6d-e5fc-49d6-aff6-138912d2988b name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:46:24 functional-752103 crio[9949]: time="2025-12-13T18:46:24.821514235Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=c45e1d7a-3ddb-41a5-9415-d5a2464cfd2b name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:50:27 functional-752103 crio[9949]: time="2025-12-13T18:50:27.674061922Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=54566989-a940-4ea0-9cb7-11a5ead5fdab name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:50:27 functional-752103 crio[9949]: time="2025-12-13T18:50:27.67476674Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=9907b75f-aebf-4fc7-948f-3e37eff08342 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:50:27 functional-752103 crio[9949]: time="2025-12-13T18:50:27.675335917Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=a5823f6b-c128-468c-ad19-87c38dcb3493 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:50:27 functional-752103 crio[9949]: time="2025-12-13T18:50:27.675801504Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=eb5c5b0d-734a-42c7-beea-2ae04458cd2c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:50:27 functional-752103 crio[9949]: time="2025-12-13T18:50:27.676236125Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=dc8b8dc3-cec8-44a2-afbb-932c674af235 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:50:27 functional-752103 crio[9949]: time="2025-12-13T18:50:27.676718434Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=fae4abe6-592a-492b-809b-edd01682c93f name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:50:27 functional-752103 crio[9949]: time="2025-12-13T18:50:27.677348338Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=21883f8b-9b90-4bb8-9843-c91d88abb931 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:54:30.981796   21270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:54:30.982677   21270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:54:30.984474   21270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:54:30.985303   21270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:54:30.987211   21270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 17:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014739] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.517365] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033368] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.774100] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.795951] kauditd_printk_skb: 39 callbacks suppressed
	[Dec13 18:17] overlayfs: idmapped layers are currently not supported
	[  +0.067652] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec13 18:23] overlayfs: idmapped layers are currently not supported
	[Dec13 18:24] overlayfs: idmapped layers are currently not supported
	[Dec13 18:42] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 18:54:31 up  1:37,  0 user,  load average: 0.15, 0.20, 0.30
	Linux functional-752103 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 18:54:28 functional-752103 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:54:28 functional-752103 kubelet[21080]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:54:28 functional-752103 kubelet[21080]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:54:28 functional-752103 kubelet[21080]: E1213 18:54:28.878399   21080 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 18:54:28 functional-752103 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 18:54:28 functional-752103 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 18:54:29 functional-752103 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 960.
	Dec 13 18:54:29 functional-752103 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:54:29 functional-752103 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:54:29 functional-752103 kubelet[21164]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:54:29 functional-752103 kubelet[21164]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:54:29 functional-752103 kubelet[21164]: E1213 18:54:29.647183   21164 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 18:54:29 functional-752103 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 18:54:29 functional-752103 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 18:54:30 functional-752103 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 961.
	Dec 13 18:54:30 functional-752103 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:54:30 functional-752103 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:54:30 functional-752103 kubelet[21187]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:54:30 functional-752103 kubelet[21187]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:54:30 functional-752103 kubelet[21187]: E1213 18:54:30.408808   21187 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 18:54:30 functional-752103 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 18:54:30 functional-752103 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 18:54:31 functional-752103 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 962.
	Dec 13 18:54:31 functional-752103 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:54:31 functional-752103 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-752103 -n functional-752103
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-752103 -n functional-752103: exit status 2 (396.387426ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-752103" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (735.25s)
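Root cause, per the kubelet journal above: kubelet v1.35.0-beta.0 refuses to start because the node is still on cgroup v1 ("kubelet is configured to not run on a host using cgroup v1"), so the static control-plane pods are never created and the 4m0s kubelet health wait times out. The kubeadm warning quoted above names the opt-in (set the KubeletConfiguration option FailCgroupV1 to false), and the minikube output suggests the cgroup-driver override. A minimal triage sketch for a local repro, assuming shell access to the same arm64 host; the profile name and start flags below are illustrative, not the exact invocation the test used:

	# Confirm the host cgroup mode: "cgroup2fs" means v2, "tmpfs" means the deprecated v1.
	stat -fc %T /sys/fs/cgroup/

	# Inspect the crash-looping kubelet with the same commands kubeadm prints.
	systemctl status kubelet
	journalctl -xeu kubelet

	# Retry with the override minikube suggests in the output above.
	out/minikube-linux-arm64 start -p functional-752103 \
	  --extra-config=kubelet.cgroup-driver=systemd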

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (2.16s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-752103 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: (dbg) Non-zero exit: kubectl --context functional-752103 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (69.87535ms)

                                                
                                                
-- stdout --
	{
	    "apiVersion": "v1",
	    "items": [],
	    "kind": "List",
	    "metadata": {
	        "resourceVersion": ""
	    }
	}

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:827: failed to get components. args "kubectl --context functional-752103 get po -l tier=control-plane -n kube-system -o=json": exit status 1
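This failure appears to be a downstream effect of the ExtraConfig failure: with the apiserver on 192.168.49.2:8441 never started, the label query returns an empty item list plus a connection-refused error. A hedged manual equivalent of the check, runnable once the apiserver is reachable (the context name comes from this run; the Ready-condition readout is illustrative, not the test's exact assertion):

	kubectl --context functional-752103 -n kube-system get pods -l tier=control-plane \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'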
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-752103
helpers_test.go:244: (dbg) docker inspect functional-752103:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b",
	        "Created": "2025-12-13T18:27:36.869398923Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 33347,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T18:27:36.933863328Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b/hostname",
	        "HostsPath": "/var/lib/docker/containers/d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b/hosts",
	        "LogPath": "/var/lib/docker/containers/d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b/d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b-json.log",
	        "Name": "/functional-752103",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-752103:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-752103",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b",
	                "LowerDir": "/var/lib/docker/overlay2/4f08fa508e4fa526830ae73227295c5d2e0629c99efbda84852b3df25bd9e170-init/diff:/var/lib/docker/overlay2/4cda671c3c20fb572bbb254b6cb2d66de67b46788c2aa883ec19024f1ff16f23/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4f08fa508e4fa526830ae73227295c5d2e0629c99efbda84852b3df25bd9e170/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4f08fa508e4fa526830ae73227295c5d2e0629c99efbda84852b3df25bd9e170/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4f08fa508e4fa526830ae73227295c5d2e0629c99efbda84852b3df25bd9e170/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-752103",
	                "Source": "/var/lib/docker/volumes/functional-752103/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-752103",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-752103",
	                "name.minikube.sigs.k8s.io": "functional-752103",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "625ea12887c8956887678f2408d6edd5b98f62bce458a6906f4f662a3001a53b",
	            "SandboxKey": "/var/run/docker/netns/625ea12887c8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-752103": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7e:2c:83:4a:30:9a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "84df48e9f7dac8c6a1b67709e5eea216d99d3f16eb50b96c7f0e4a82b3193d56",
	                    "EndpointID": "e69b1f9610d40396647a2d78f0170c31b9cd8e641fc8465e742649cccee8e591",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-752103",
	                        "d72b547cdcc2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
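A short sketch based on the inspect output above: the container is Running and 8441/tcp is published on 127.0.0.1:32786, so the apiserver can be probed from the host without going through kubectl. The host port is taken from this particular run and will differ between runs:

    # probe the published apiserver port directly (livez is a standard kube-apiserver endpoint)
    curl -sk https://127.0.0.1:32786/livez || echo "apiserver not answering on the published port"
    # check inside the node container whether an apiserver container exists at all
    docker exec functional-752103 sudo crictl ps -a --name kube-apiserver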
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-752103 -n functional-752103
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-752103 -n functional-752103: exit status 2 (323.187226ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
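Note (illustrative, assuming the same profile): the Host field reports "Running" while the earlier APIServer check reported "Stopped"; all status fields can be seen together with minikube's JSON output:

    out/minikube-linux-arm64 status -p functional-752103 -o json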
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-350101 image ls --format short --alsologtostderr                                                                                       │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ ssh     │ functional-350101 ssh pgrep buildkitd                                                                                                             │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │                     │
	│ image   │ functional-350101 image ls --format json --alsologtostderr                                                                                        │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ image   │ functional-350101 image ls --format table --alsologtostderr                                                                                       │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ image   │ functional-350101 image build -t localhost/my-image:functional-350101 testdata/build --alsologtostderr                                            │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ image   │ functional-350101 image ls                                                                                                                        │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ delete  │ -p functional-350101                                                                                                                              │ functional-350101 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │ 13 Dec 25 18:27 UTC │
	│ start   │ -p functional-752103 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:27 UTC │                     │
	│ start   │ -p functional-752103 --alsologtostderr -v=8                                                                                                       │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:35 UTC │                     │
	│ cache   │ functional-752103 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │ 13 Dec 25 18:42 UTC │
	│ cache   │ functional-752103 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │ 13 Dec 25 18:42 UTC │
	│ cache   │ functional-752103 cache add registry.k8s.io/pause:latest                                                                                          │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │ 13 Dec 25 18:42 UTC │
	│ cache   │ functional-752103 cache add minikube-local-cache-test:functional-752103                                                                           │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │ 13 Dec 25 18:42 UTC │
	│ cache   │ functional-752103 cache delete minikube-local-cache-test:functional-752103                                                                        │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │ 13 Dec 25 18:42 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │ 13 Dec 25 18:42 UTC │
	│ cache   │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │ 13 Dec 25 18:42 UTC │
	│ ssh     │ functional-752103 ssh sudo crictl images                                                                                                          │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │ 13 Dec 25 18:42 UTC │
	│ ssh     │ functional-752103 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │ 13 Dec 25 18:42 UTC │
	│ ssh     │ functional-752103 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │                     │
	│ cache   │ functional-752103 cache reload                                                                                                                    │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │ 13 Dec 25 18:42 UTC │
	│ ssh     │ functional-752103 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │ 13 Dec 25 18:42 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │ 13 Dec 25 18:42 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │ 13 Dec 25 18:42 UTC │
	│ kubectl │ functional-752103 kubectl -- --context functional-752103 get pods                                                                                 │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │                     │
	│ start   │ -p functional-752103 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                          │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 18:42:16
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 18:42:16.832380   44722 out.go:360] Setting OutFile to fd 1 ...
	I1213 18:42:16.832482   44722 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:42:16.832486   44722 out.go:374] Setting ErrFile to fd 2...
	I1213 18:42:16.832490   44722 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:42:16.832750   44722 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-2686/.minikube/bin
	I1213 18:42:16.833154   44722 out.go:368] Setting JSON to false
	I1213 18:42:16.833990   44722 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5089,"bootTime":1765646248,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 18:42:16.834047   44722 start.go:143] virtualization:  
	I1213 18:42:16.838135   44722 out.go:179] * [functional-752103] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 18:42:16.841728   44722 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 18:42:16.841798   44722 notify.go:221] Checking for updates...
	I1213 18:42:16.848230   44722 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 18:42:16.851409   44722 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-2686/kubeconfig
	I1213 18:42:16.854607   44722 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-2686/.minikube
	I1213 18:42:16.857801   44722 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 18:42:16.860996   44722 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 18:42:16.864675   44722 config.go:182] Loaded profile config "functional-752103": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 18:42:16.864787   44722 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 18:42:16.894628   44722 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 18:42:16.894745   44722 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 18:42:16.957351   44722 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-13 18:42:16.94760506 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 18:42:16.957447   44722 docker.go:319] overlay module found
	I1213 18:42:16.960782   44722 out.go:179] * Using the docker driver based on existing profile
	I1213 18:42:16.963851   44722 start.go:309] selected driver: docker
	I1213 18:42:16.963862   44722 start.go:927] validating driver "docker" against &{Name:functional-752103 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-752103 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 18:42:16.963972   44722 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 18:42:16.964069   44722 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 18:42:17.021522   44722 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-13 18:42:17.012232642 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 18:42:17.021951   44722 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 18:42:17.021974   44722 cni.go:84] Creating CNI manager for ""
	I1213 18:42:17.022024   44722 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 18:42:17.022071   44722 start.go:353] cluster config:
	{Name:functional-752103 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-752103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog
:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 18:42:17.025231   44722 out.go:179] * Starting "functional-752103" primary control-plane node in "functional-752103" cluster
	I1213 18:42:17.028293   44722 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 18:42:17.031266   44722 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 18:42:17.034129   44722 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 18:42:17.034163   44722 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-2686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1213 18:42:17.034171   44722 cache.go:65] Caching tarball of preloaded images
	I1213 18:42:17.034196   44722 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 18:42:17.034259   44722 preload.go:238] Found /home/jenkins/minikube-integration/22122-2686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 18:42:17.034268   44722 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1213 18:42:17.034379   44722 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/config.json ...
	I1213 18:42:17.054759   44722 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 18:42:17.054770   44722 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 18:42:17.054784   44722 cache.go:243] Successfully downloaded all kic artifacts
	I1213 18:42:17.054813   44722 start.go:360] acquireMachinesLock for functional-752103: {Name:mkf4ec1d9e1836ef54983db4562aedfd1a9c51c3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 18:42:17.054868   44722 start.go:364] duration metric: took 38.187µs to acquireMachinesLock for "functional-752103"
	I1213 18:42:17.054886   44722 start.go:96] Skipping create...Using existing machine configuration
	I1213 18:42:17.054891   44722 fix.go:54] fixHost starting: 
	I1213 18:42:17.055151   44722 cli_runner.go:164] Run: docker container inspect functional-752103 --format={{.State.Status}}
	I1213 18:42:17.071486   44722 fix.go:112] recreateIfNeeded on functional-752103: state=Running err=<nil>
	W1213 18:42:17.071504   44722 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 18:42:17.074803   44722 out.go:252] * Updating the running docker "functional-752103" container ...
	I1213 18:42:17.074833   44722 machine.go:94] provisionDockerMachine start ...
	I1213 18:42:17.074935   44722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:42:17.093274   44722 main.go:143] libmachine: Using SSH client type: native
	I1213 18:42:17.093585   44722 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1213 18:42:17.093591   44722 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 18:42:17.244524   44722 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-752103
	
	I1213 18:42:17.244537   44722 ubuntu.go:182] provisioning hostname "functional-752103"
	I1213 18:42:17.244597   44722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:42:17.262380   44722 main.go:143] libmachine: Using SSH client type: native
	I1213 18:42:17.262682   44722 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1213 18:42:17.262690   44722 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-752103 && echo "functional-752103" | sudo tee /etc/hostname
	I1213 18:42:17.422688   44722 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-752103
	
	I1213 18:42:17.422759   44722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:42:17.440827   44722 main.go:143] libmachine: Using SSH client type: native
	I1213 18:42:17.441150   44722 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1213 18:42:17.441163   44722 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-752103' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-752103/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-752103' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 18:42:17.593792   44722 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 18:42:17.593821   44722 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-2686/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-2686/.minikube}
	I1213 18:42:17.593841   44722 ubuntu.go:190] setting up certificates
	I1213 18:42:17.593861   44722 provision.go:84] configureAuth start
	I1213 18:42:17.593949   44722 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-752103
	I1213 18:42:17.612231   44722 provision.go:143] copyHostCerts
	I1213 18:42:17.612297   44722 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem, removing ...
	I1213 18:42:17.612304   44722 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem
	I1213 18:42:17.612382   44722 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem (1675 bytes)
	I1213 18:42:17.612525   44722 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem, removing ...
	I1213 18:42:17.612528   44722 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem
	I1213 18:42:17.612554   44722 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem (1082 bytes)
	I1213 18:42:17.612619   44722 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem, removing ...
	I1213 18:42:17.612622   44722 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem
	I1213 18:42:17.612646   44722 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem (1123 bytes)
	I1213 18:42:17.612700   44722 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem org=jenkins.functional-752103 san=[127.0.0.1 192.168.49.2 functional-752103 localhost minikube]
	I1213 18:42:17.675451   44722 provision.go:177] copyRemoteCerts
	I1213 18:42:17.675509   44722 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 18:42:17.675551   44722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:42:17.693626   44722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa Username:docker}
	I1213 18:42:17.798419   44722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 18:42:17.816185   44722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 18:42:17.833700   44722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 18:42:17.853857   44722 provision.go:87] duration metric: took 259.975405ms to configureAuth
	I1213 18:42:17.853904   44722 ubuntu.go:206] setting minikube options for container-runtime
	I1213 18:42:17.854123   44722 config.go:182] Loaded profile config "functional-752103": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 18:42:17.854230   44722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:42:17.879965   44722 main.go:143] libmachine: Using SSH client type: native
	I1213 18:42:17.880277   44722 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1213 18:42:17.880288   44722 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 18:42:18.248633   44722 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 18:42:18.248647   44722 machine.go:97] duration metric: took 1.173808025s to provisionDockerMachine
	I1213 18:42:18.248658   44722 start.go:293] postStartSetup for "functional-752103" (driver="docker")
	I1213 18:42:18.248669   44722 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 18:42:18.248743   44722 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 18:42:18.248792   44722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:42:18.266147   44722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa Username:docker}
	I1213 18:42:18.373221   44722 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 18:42:18.376713   44722 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 18:42:18.376729   44722 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 18:42:18.376740   44722 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-2686/.minikube/addons for local assets ...
	I1213 18:42:18.376791   44722 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-2686/.minikube/files for local assets ...
	I1213 18:42:18.376867   44722 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem -> 46372.pem in /etc/ssl/certs
	I1213 18:42:18.376940   44722 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/test/nested/copy/4637/hosts -> hosts in /etc/test/nested/copy/4637
	I1213 18:42:18.376981   44722 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4637
	I1213 18:42:18.384622   44722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem --> /etc/ssl/certs/46372.pem (1708 bytes)
	I1213 18:42:18.402512   44722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/test/nested/copy/4637/hosts --> /etc/test/nested/copy/4637/hosts (40 bytes)
	I1213 18:42:18.419539   44722 start.go:296] duration metric: took 170.867557ms for postStartSetup
	I1213 18:42:18.419610   44722 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 18:42:18.419664   44722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:42:18.436637   44722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa Username:docker}
	I1213 18:42:18.538189   44722 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 18:42:18.542827   44722 fix.go:56] duration metric: took 1.487930222s for fixHost
	I1213 18:42:18.542846   44722 start.go:83] releasing machines lock for "functional-752103", held for 1.487968187s
	I1213 18:42:18.542915   44722 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-752103
	I1213 18:42:18.560389   44722 ssh_runner.go:195] Run: cat /version.json
	I1213 18:42:18.560434   44722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:42:18.560692   44722 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 18:42:18.560748   44722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:42:18.583551   44722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa Username:docker}
	I1213 18:42:18.591018   44722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa Username:docker}
	I1213 18:42:18.701640   44722 ssh_runner.go:195] Run: systemctl --version
	I1213 18:42:18.800116   44722 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 18:42:18.836359   44722 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 18:42:18.840572   44722 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 18:42:18.840646   44722 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 18:42:18.848286   44722 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 18:42:18.848299   44722 start.go:496] detecting cgroup driver to use...
	I1213 18:42:18.848329   44722 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 18:42:18.848379   44722 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 18:42:18.864054   44722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 18:42:18.878242   44722 docker.go:218] disabling cri-docker service (if available) ...
	I1213 18:42:18.878341   44722 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 18:42:18.895499   44722 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 18:42:18.910156   44722 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 18:42:19.020039   44722 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 18:42:19.142208   44722 docker.go:234] disabling docker service ...
	I1213 18:42:19.142263   44722 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 18:42:19.158384   44722 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 18:42:19.171631   44722 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 18:42:19.293369   44722 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 18:42:19.422037   44722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 18:42:19.435333   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 18:42:19.449327   44722 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 18:42:19.449380   44722 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:42:19.458689   44722 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 18:42:19.458748   44722 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:42:19.467502   44722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:42:19.476408   44722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:42:19.485815   44722 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 18:42:19.494237   44722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:42:19.503335   44722 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:42:19.511920   44722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:42:19.520510   44722 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 18:42:19.528006   44722 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 18:42:19.535403   44722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 18:42:19.669317   44722 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 18:42:19.868011   44722 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 18:42:19.868104   44722 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 18:42:19.871850   44722 start.go:564] Will wait 60s for crictl version
	I1213 18:42:19.871906   44722 ssh_runner.go:195] Run: which crictl
	I1213 18:42:19.875387   44722 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 18:42:19.901618   44722 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 18:42:19.901703   44722 ssh_runner.go:195] Run: crio --version
	I1213 18:42:19.929436   44722 ssh_runner.go:195] Run: crio --version
	I1213 18:42:19.965392   44722 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1213 18:42:19.968348   44722 cli_runner.go:164] Run: docker network inspect functional-752103 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 18:42:19.986389   44722 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 18:42:19.993243   44722 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1213 18:42:19.996095   44722 kubeadm.go:884] updating cluster {Name:functional-752103 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-752103 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 18:42:19.996213   44722 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 18:42:19.996291   44722 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 18:42:20.057560   44722 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 18:42:20.057583   44722 crio.go:433] Images already preloaded, skipping extraction
	I1213 18:42:20.057640   44722 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 18:42:20.089218   44722 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 18:42:20.089230   44722 cache_images.go:86] Images are preloaded, skipping loading
	I1213 18:42:20.089236   44722 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1213 18:42:20.089328   44722 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-752103 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-752103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 18:42:20.089414   44722 ssh_runner.go:195] Run: crio config
	I1213 18:42:20.177167   44722 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1213 18:42:20.177187   44722 cni.go:84] Creating CNI manager for ""
	I1213 18:42:20.177196   44722 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 18:42:20.177232   44722 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 18:42:20.177254   44722 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-752103 NodeName:functional-752103 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 18:42:20.177418   44722 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-752103"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 18:42:20.177484   44722 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 18:42:20.185578   44722 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 18:42:20.185638   44722 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 18:42:20.192929   44722 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1213 18:42:20.205146   44722 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 18:42:20.217154   44722 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I1213 18:42:20.229717   44722 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 18:42:20.233247   44722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 18:42:20.353829   44722 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 18:42:20.830403   44722 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103 for IP: 192.168.49.2
	I1213 18:42:20.830413   44722 certs.go:195] generating shared ca certs ...
	I1213 18:42:20.830433   44722 certs.go:227] acquiring lock for ca certs: {Name:mkf9b87b9a1a82bdfcae8a54365751154f6d858f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 18:42:20.830617   44722 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key
	I1213 18:42:20.830683   44722 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key
	I1213 18:42:20.830690   44722 certs.go:257] generating profile certs ...
	I1213 18:42:20.830812   44722 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/client.key
	I1213 18:42:20.830890   44722 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/apiserver.key.597c6026
	I1213 18:42:20.830949   44722 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/proxy-client.key
	I1213 18:42:20.831080   44722 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637.pem (1338 bytes)
	W1213 18:42:20.831115   44722 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637_empty.pem, impossibly tiny 0 bytes
	I1213 18:42:20.831122   44722 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 18:42:20.831151   44722 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem (1082 bytes)
	I1213 18:42:20.831178   44722 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem (1123 bytes)
	I1213 18:42:20.831204   44722 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem (1675 bytes)
	I1213 18:42:20.831248   44722 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem (1708 bytes)
	I1213 18:42:20.831981   44722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 18:42:20.856838   44722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 18:42:20.879274   44722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 18:42:20.903042   44722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 18:42:20.923306   44722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 18:42:20.942121   44722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 18:42:20.960173   44722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 18:42:20.977612   44722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 18:42:20.994747   44722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 18:42:21.015274   44722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637.pem --> /usr/share/ca-certificates/4637.pem (1338 bytes)
	I1213 18:42:21.032852   44722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem --> /usr/share/ca-certificates/46372.pem (1708 bytes)
	I1213 18:42:21.049826   44722 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 18:42:21.062502   44722 ssh_runner.go:195] Run: openssl version
	I1213 18:42:21.068589   44722 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 18:42:21.075691   44722 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 18:42:21.083152   44722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 18:42:21.086777   44722 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I1213 18:42:21.086838   44722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 18:42:21.127646   44722 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 18:42:21.135282   44722 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4637.pem
	I1213 18:42:21.142547   44722 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4637.pem /etc/ssl/certs/4637.pem
	I1213 18:42:21.150436   44722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4637.pem
	I1213 18:42:21.154171   44722 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 18:27 /usr/share/ca-certificates/4637.pem
	I1213 18:42:21.154226   44722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4637.pem
	I1213 18:42:21.195398   44722 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 18:42:21.202918   44722 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/46372.pem
	I1213 18:42:21.210392   44722 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/46372.pem /etc/ssl/certs/46372.pem
	I1213 18:42:21.218018   44722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/46372.pem
	I1213 18:42:21.221839   44722 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 18:27 /usr/share/ca-certificates/46372.pem
	I1213 18:42:21.221907   44722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/46372.pem
	I1213 18:42:21.262578   44722 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 18:42:21.269897   44722 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 18:42:21.273658   44722 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 18:42:21.314538   44722 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 18:42:21.355677   44722 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 18:42:21.398275   44722 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 18:42:21.439207   44722 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 18:42:21.480256   44722 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 18:42:21.526473   44722 kubeadm.go:401] StartCluster: {Name:functional-752103 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-752103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 18:42:21.526551   44722 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 18:42:21.526617   44722 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 18:42:21.557940   44722 cri.go:89] found id: ""
	I1213 18:42:21.558001   44722 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 18:42:21.566021   44722 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 18:42:21.566031   44722 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 18:42:21.566081   44722 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 18:42:21.573603   44722 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 18:42:21.574106   44722 kubeconfig.go:125] found "functional-752103" server: "https://192.168.49.2:8441"
	I1213 18:42:21.575413   44722 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 18:42:21.585702   44722 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-13 18:27:45.810242505 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-13 18:42:20.222041116 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1213 18:42:21.585713   44722 kubeadm.go:1161] stopping kube-system containers ...
	I1213 18:42:21.585724   44722 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1213 18:42:21.585780   44722 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 18:42:21.617768   44722 cri.go:89] found id: ""
	I1213 18:42:21.617827   44722 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 18:42:21.635403   44722 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 18:42:21.643636   44722 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 13 18:31 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Dec 13 18:31 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 13 18:31 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec 13 18:31 /etc/kubernetes/scheduler.conf
	
	I1213 18:42:21.643708   44722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 18:42:21.651764   44722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 18:42:21.659161   44722 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 18:42:21.659213   44722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 18:42:21.666555   44722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 18:42:21.674192   44722 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 18:42:21.674247   44722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 18:42:21.681652   44722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 18:42:21.689753   44722 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 18:42:21.689823   44722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 18:42:21.697372   44722 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 18:42:21.705090   44722 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 18:42:21.753330   44722 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 18:42:23.314116   44722 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.560761972s)
	I1213 18:42:23.314176   44722 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 18:42:23.523724   44722 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 18:42:23.594421   44722 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1213 18:42:23.642920   44722 api_server.go:52] waiting for apiserver process to appear ...
	I1213 18:42:23.642986   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:24.143977   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:24.643428   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:25.143550   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:25.643771   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:26.143193   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:26.643175   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:27.143974   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:27.643187   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:28.143912   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:28.643171   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:29.144072   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:29.644225   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:30.144075   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:30.643706   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:31.143172   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:31.643056   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:32.143628   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:32.643125   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:33.143827   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:33.643131   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:34.143247   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:34.643324   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:35.143141   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:35.643248   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:36.143915   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:36.644040   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:37.143715   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:37.643270   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:38.143997   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:38.643143   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:39.144023   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:39.643975   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:40.143050   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:40.643089   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:41.143722   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:41.643477   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:42.143838   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:42.643431   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:43.143175   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:43.643406   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:44.143895   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:44.643143   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:45.144217   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:45.644055   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:46.143137   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:46.644107   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:47.143996   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:47.643160   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:48.143815   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:48.643858   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:49.143166   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:49.644081   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:50.143765   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:50.643065   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:51.143582   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:51.643619   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:52.143220   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:52.643909   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:53.143832   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:53.643709   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:54.143426   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:54.643284   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:55.143992   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:55.643406   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:56.143943   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:56.643844   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:57.143618   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:57.643188   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:58.143857   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:58.643381   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:59.143183   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:59.643139   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:00.143730   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:00.643184   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:01.143789   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:01.643677   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:02.143883   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:02.643235   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:03.143175   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:03.643112   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:04.143893   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:04.643955   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:05.144057   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:05.643239   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:06.143229   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:06.643162   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:07.143132   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:07.643342   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:08.143161   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:08.643365   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:09.144023   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:09.643759   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:10.143925   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:10.644116   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:11.143184   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:11.643163   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:12.144081   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:12.643761   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:13.143171   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:13.643174   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:14.143070   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:14.643090   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:15.143762   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:15.643166   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:16.143069   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:16.644103   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:17.143993   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:17.643934   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:18.143216   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:18.643988   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:19.143982   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:19.643766   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:20.143191   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:20.644118   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:21.143094   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:21.644013   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:22.143973   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:22.643967   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:23.143991   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:23.643861   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:43:23.643960   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:43:23.674160   44722 cri.go:89] found id: ""
	I1213 18:43:23.674175   44722 logs.go:282] 0 containers: []
	W1213 18:43:23.674182   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:43:23.674187   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:43:23.674245   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:43:23.700540   44722 cri.go:89] found id: ""
	I1213 18:43:23.700554   44722 logs.go:282] 0 containers: []
	W1213 18:43:23.700561   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:43:23.700566   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:43:23.700624   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:43:23.726064   44722 cri.go:89] found id: ""
	I1213 18:43:23.726078   44722 logs.go:282] 0 containers: []
	W1213 18:43:23.726084   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:43:23.726089   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:43:23.726148   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:43:23.752099   44722 cri.go:89] found id: ""
	I1213 18:43:23.752113   44722 logs.go:282] 0 containers: []
	W1213 18:43:23.752120   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:43:23.752125   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:43:23.752190   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:43:23.778105   44722 cri.go:89] found id: ""
	I1213 18:43:23.778120   44722 logs.go:282] 0 containers: []
	W1213 18:43:23.778126   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:43:23.778131   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:43:23.778193   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:43:23.806032   44722 cri.go:89] found id: ""
	I1213 18:43:23.806047   44722 logs.go:282] 0 containers: []
	W1213 18:43:23.806054   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:43:23.806059   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:43:23.806117   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:43:23.832635   44722 cri.go:89] found id: ""
	I1213 18:43:23.832649   44722 logs.go:282] 0 containers: []
	W1213 18:43:23.832658   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:43:23.832667   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:43:23.832679   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:43:23.899244   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:43:23.899262   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:43:23.910777   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:43:23.910793   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:43:23.979546   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:43:23.970843   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:23.971479   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:23.973158   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:23.973794   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:23.975445   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:43:23.970843   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:23.971479   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:23.973158   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:23.973794   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:23.975445   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:43:23.979557   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:43:23.979567   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:43:24.055422   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:43:24.055441   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:43:26.587216   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:26.602744   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:43:26.602803   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:43:26.637528   44722 cri.go:89] found id: ""
	I1213 18:43:26.637543   44722 logs.go:282] 0 containers: []
	W1213 18:43:26.637550   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:43:26.637555   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:43:26.637627   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:43:26.668738   44722 cri.go:89] found id: ""
	I1213 18:43:26.668752   44722 logs.go:282] 0 containers: []
	W1213 18:43:26.668759   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:43:26.668764   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:43:26.668820   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:43:26.694813   44722 cri.go:89] found id: ""
	I1213 18:43:26.694827   44722 logs.go:282] 0 containers: []
	W1213 18:43:26.694834   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:43:26.694839   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:43:26.694903   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:43:26.724152   44722 cri.go:89] found id: ""
	I1213 18:43:26.724165   44722 logs.go:282] 0 containers: []
	W1213 18:43:26.724172   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:43:26.724177   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:43:26.724234   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:43:26.753666   44722 cri.go:89] found id: ""
	I1213 18:43:26.753680   44722 logs.go:282] 0 containers: []
	W1213 18:43:26.753687   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:43:26.753692   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:43:26.753751   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:43:26.778797   44722 cri.go:89] found id: ""
	I1213 18:43:26.778810   44722 logs.go:282] 0 containers: []
	W1213 18:43:26.778817   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:43:26.778822   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:43:26.778878   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:43:26.804095   44722 cri.go:89] found id: ""
	I1213 18:43:26.804108   44722 logs.go:282] 0 containers: []
	W1213 18:43:26.804121   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:43:26.804128   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:43:26.804139   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:43:26.872610   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:43:26.863726   11132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:26.864249   11132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:26.865989   11132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:26.866485   11132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:26.868188   11132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:43:26.863726   11132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:26.864249   11132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:26.865989   11132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:26.866485   11132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:26.868188   11132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:43:26.872619   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:43:26.872629   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:43:26.941929   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:43:26.941948   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:43:26.969504   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:43:26.969520   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:43:27.036106   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:43:27.036126   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:43:29.549238   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:29.561563   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:43:29.561629   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:43:29.595212   44722 cri.go:89] found id: ""
	I1213 18:43:29.595227   44722 logs.go:282] 0 containers: []
	W1213 18:43:29.595234   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:43:29.595239   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:43:29.595298   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:43:29.632368   44722 cri.go:89] found id: ""
	I1213 18:43:29.632382   44722 logs.go:282] 0 containers: []
	W1213 18:43:29.632388   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:43:29.632393   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:43:29.632450   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:43:29.661185   44722 cri.go:89] found id: ""
	I1213 18:43:29.661199   44722 logs.go:282] 0 containers: []
	W1213 18:43:29.661206   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:43:29.661211   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:43:29.661271   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:43:29.686961   44722 cri.go:89] found id: ""
	I1213 18:43:29.686974   44722 logs.go:282] 0 containers: []
	W1213 18:43:29.686981   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:43:29.686986   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:43:29.687049   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:43:29.713104   44722 cri.go:89] found id: ""
	I1213 18:43:29.713118   44722 logs.go:282] 0 containers: []
	W1213 18:43:29.713125   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:43:29.713130   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:43:29.713190   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:43:29.738029   44722 cri.go:89] found id: ""
	I1213 18:43:29.738042   44722 logs.go:282] 0 containers: []
	W1213 18:43:29.738049   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:43:29.738054   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:43:29.738116   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:43:29.763765   44722 cri.go:89] found id: ""
	I1213 18:43:29.763779   44722 logs.go:282] 0 containers: []
	W1213 18:43:29.763785   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:43:29.763793   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:43:29.763803   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:43:29.829845   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:43:29.829864   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:43:29.841137   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:43:29.841153   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:43:29.910214   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:43:29.900921   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:29.902099   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:29.903031   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:29.903808   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:29.904683   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:43:29.900921   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:29.902099   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:29.903031   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:29.903808   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:29.904683   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:43:29.910238   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:43:29.910251   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:43:29.979995   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:43:29.980012   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:43:32.559824   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:32.569836   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:43:32.569896   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:43:32.598661   44722 cri.go:89] found id: ""
	I1213 18:43:32.598675   44722 logs.go:282] 0 containers: []
	W1213 18:43:32.598682   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:43:32.598687   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:43:32.598741   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:43:32.629547   44722 cri.go:89] found id: ""
	I1213 18:43:32.629562   44722 logs.go:282] 0 containers: []
	W1213 18:43:32.629568   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:43:32.629573   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:43:32.629650   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:43:32.654825   44722 cri.go:89] found id: ""
	I1213 18:43:32.654839   44722 logs.go:282] 0 containers: []
	W1213 18:43:32.654846   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:43:32.654851   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:43:32.654908   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:43:32.680611   44722 cri.go:89] found id: ""
	I1213 18:43:32.680625   44722 logs.go:282] 0 containers: []
	W1213 18:43:32.680632   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:43:32.680637   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:43:32.680695   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:43:32.706618   44722 cri.go:89] found id: ""
	I1213 18:43:32.706632   44722 logs.go:282] 0 containers: []
	W1213 18:43:32.706639   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:43:32.706643   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:43:32.706702   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:43:32.730958   44722 cri.go:89] found id: ""
	I1213 18:43:32.730971   44722 logs.go:282] 0 containers: []
	W1213 18:43:32.730978   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:43:32.730983   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:43:32.731052   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:43:32.759159   44722 cri.go:89] found id: ""
	I1213 18:43:32.759172   44722 logs.go:282] 0 containers: []
	W1213 18:43:32.759179   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:43:32.759186   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:43:32.759196   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:43:32.824778   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:43:32.824797   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:43:32.835474   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:43:32.835491   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:43:32.898129   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:43:32.889603   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:32.890366   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:32.891862   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:32.892440   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:32.893974   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:43:32.889603   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:32.890366   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:32.891862   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:32.892440   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:32.893974   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:43:32.898149   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:43:32.898160   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:43:32.970010   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:43:32.970027   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
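	The loop above repeats the same health checks while the test waits for the API server to come up; a minimal sketch of running the same diagnostics by hand on the node, assuming the profile under test and the apiserver port 8441 seen in these logs, would be roughly:

	# open a shell on the minikube node (<profile> is a placeholder for the profile under test)
	minikube -p <profile> ssh
	# the same container checks minikube performs via crictl
	sudo crictl ps -a --quiet --name=kube-apiserver
	sudo crictl ps -a --quiet --name=etcd
	# the same log sources gathered above
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	# probe the apiserver port that kubectl is failing to reach (8441 in these logs)
	curl -sk https://localhost:8441/healthz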
	I1213 18:43:35.499162   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:35.510104   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:43:35.510168   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:43:35.536034   44722 cri.go:89] found id: ""
	I1213 18:43:35.536054   44722 logs.go:282] 0 containers: []
	W1213 18:43:35.536061   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:43:35.536066   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:43:35.536125   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:43:35.560363   44722 cri.go:89] found id: ""
	I1213 18:43:35.560377   44722 logs.go:282] 0 containers: []
	W1213 18:43:35.560384   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:43:35.560389   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:43:35.560447   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:43:35.595466   44722 cri.go:89] found id: ""
	I1213 18:43:35.595480   44722 logs.go:282] 0 containers: []
	W1213 18:43:35.595486   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:43:35.595491   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:43:35.595546   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:43:35.626296   44722 cri.go:89] found id: ""
	I1213 18:43:35.626310   44722 logs.go:282] 0 containers: []
	W1213 18:43:35.626316   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:43:35.626321   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:43:35.626376   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:43:35.653200   44722 cri.go:89] found id: ""
	I1213 18:43:35.653214   44722 logs.go:282] 0 containers: []
	W1213 18:43:35.653221   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:43:35.653225   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:43:35.653322   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:43:35.678439   44722 cri.go:89] found id: ""
	I1213 18:43:35.678453   44722 logs.go:282] 0 containers: []
	W1213 18:43:35.678459   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:43:35.678464   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:43:35.678525   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:43:35.703934   44722 cri.go:89] found id: ""
	I1213 18:43:35.703948   44722 logs.go:282] 0 containers: []
	W1213 18:43:35.703954   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:43:35.703962   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:43:35.703972   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:43:35.769879   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:43:35.769897   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:43:35.781228   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:43:35.781245   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:43:35.848304   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:43:35.840026   11450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:35.840682   11450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:35.842398   11450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:35.842978   11450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:35.844548   11450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:43:35.840026   11450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:35.840682   11450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:35.842398   11450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:35.842978   11450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:35.844548   11450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:43:35.848316   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:43:35.848327   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:43:35.917611   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:43:35.917630   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:43:38.449407   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:38.459447   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:43:38.459504   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:43:38.485144   44722 cri.go:89] found id: ""
	I1213 18:43:38.485156   44722 logs.go:282] 0 containers: []
	W1213 18:43:38.485163   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:43:38.485179   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:43:38.485241   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:43:38.513966   44722 cri.go:89] found id: ""
	I1213 18:43:38.513980   44722 logs.go:282] 0 containers: []
	W1213 18:43:38.513987   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:43:38.513992   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:43:38.514050   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:43:38.540044   44722 cri.go:89] found id: ""
	I1213 18:43:38.540058   44722 logs.go:282] 0 containers: []
	W1213 18:43:38.540065   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:43:38.540070   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:43:38.540128   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:43:38.570046   44722 cri.go:89] found id: ""
	I1213 18:43:38.570060   44722 logs.go:282] 0 containers: []
	W1213 18:43:38.570067   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:43:38.570072   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:43:38.570131   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:43:38.602431   44722 cri.go:89] found id: ""
	I1213 18:43:38.602444   44722 logs.go:282] 0 containers: []
	W1213 18:43:38.602451   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:43:38.602456   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:43:38.602513   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:43:38.631212   44722 cri.go:89] found id: ""
	I1213 18:43:38.631226   44722 logs.go:282] 0 containers: []
	W1213 18:43:38.631233   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:43:38.631238   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:43:38.631295   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:43:38.658361   44722 cri.go:89] found id: ""
	I1213 18:43:38.658375   44722 logs.go:282] 0 containers: []
	W1213 18:43:38.658383   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:43:38.658391   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:43:38.658401   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:43:38.728418   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:43:38.728436   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:43:38.739710   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:43:38.739726   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:43:38.807705   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:43:38.799135   11557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:38.799833   11557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:38.801634   11557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:38.802286   11557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:38.803965   11557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:43:38.799135   11557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:38.799833   11557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:38.801634   11557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:38.802286   11557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:38.803965   11557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:43:38.807715   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:43:38.807726   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:43:38.876773   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:43:38.876792   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:43:41.406031   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:41.416061   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:43:41.416122   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:43:41.441164   44722 cri.go:89] found id: ""
	I1213 18:43:41.441178   44722 logs.go:282] 0 containers: []
	W1213 18:43:41.441184   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:43:41.441189   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:43:41.441246   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:43:41.468283   44722 cri.go:89] found id: ""
	I1213 18:43:41.468296   44722 logs.go:282] 0 containers: []
	W1213 18:43:41.468303   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:43:41.468313   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:43:41.468369   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:43:41.492435   44722 cri.go:89] found id: ""
	I1213 18:43:41.492449   44722 logs.go:282] 0 containers: []
	W1213 18:43:41.492456   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:43:41.492461   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:43:41.492525   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:43:41.517861   44722 cri.go:89] found id: ""
	I1213 18:43:41.517874   44722 logs.go:282] 0 containers: []
	W1213 18:43:41.517881   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:43:41.517886   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:43:41.517946   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:43:41.542334   44722 cri.go:89] found id: ""
	I1213 18:43:41.542348   44722 logs.go:282] 0 containers: []
	W1213 18:43:41.542354   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:43:41.542359   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:43:41.542420   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:43:41.566791   44722 cri.go:89] found id: ""
	I1213 18:43:41.566805   44722 logs.go:282] 0 containers: []
	W1213 18:43:41.566812   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:43:41.566817   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:43:41.566873   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:43:41.605333   44722 cri.go:89] found id: ""
	I1213 18:43:41.605347   44722 logs.go:282] 0 containers: []
	W1213 18:43:41.605353   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:43:41.605361   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:43:41.605372   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:43:41.685285   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:43:41.685307   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:43:41.719016   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:43:41.719031   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:43:41.784620   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:43:41.784638   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:43:41.797084   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:43:41.797099   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:43:41.863425   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:43:41.855920   11672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:41.856329   11672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:41.857901   11672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:41.858215   11672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:41.859646   11672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:43:41.855920   11672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:41.856329   11672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:41.857901   11672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:41.858215   11672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:41.859646   11672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:43:44.365147   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:44.375234   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:43:44.375292   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:43:44.404071   44722 cri.go:89] found id: ""
	I1213 18:43:44.404084   44722 logs.go:282] 0 containers: []
	W1213 18:43:44.404091   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:43:44.404100   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:43:44.404159   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:43:44.429141   44722 cri.go:89] found id: ""
	I1213 18:43:44.429154   44722 logs.go:282] 0 containers: []
	W1213 18:43:44.429161   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:43:44.429166   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:43:44.429235   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:43:44.453307   44722 cri.go:89] found id: ""
	I1213 18:43:44.453321   44722 logs.go:282] 0 containers: []
	W1213 18:43:44.453328   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:43:44.453332   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:43:44.453409   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:43:44.478549   44722 cri.go:89] found id: ""
	I1213 18:43:44.478563   44722 logs.go:282] 0 containers: []
	W1213 18:43:44.478570   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:43:44.478576   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:43:44.478636   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:43:44.504258   44722 cri.go:89] found id: ""
	I1213 18:43:44.504272   44722 logs.go:282] 0 containers: []
	W1213 18:43:44.504278   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:43:44.504283   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:43:44.504340   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:43:44.528573   44722 cri.go:89] found id: ""
	I1213 18:43:44.528587   44722 logs.go:282] 0 containers: []
	W1213 18:43:44.528594   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:43:44.528599   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:43:44.528655   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:43:44.553529   44722 cri.go:89] found id: ""
	I1213 18:43:44.553555   44722 logs.go:282] 0 containers: []
	W1213 18:43:44.553562   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:43:44.553570   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:43:44.553581   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:43:44.591322   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:43:44.591339   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:43:44.676235   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:43:44.676264   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:43:44.687308   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:43:44.687333   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:43:44.749534   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:43:44.740808   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:44.741545   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:44.743186   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:44.743511   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:44.745093   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:43:44.740808   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:44.741545   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:44.743186   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:44.743511   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:44.745093   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:43:44.749567   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:43:44.749577   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:43:47.317951   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:47.328222   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:43:47.328296   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:43:47.357484   44722 cri.go:89] found id: ""
	I1213 18:43:47.357498   44722 logs.go:282] 0 containers: []
	W1213 18:43:47.357515   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:43:47.357521   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:43:47.357593   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:43:47.388340   44722 cri.go:89] found id: ""
	I1213 18:43:47.388354   44722 logs.go:282] 0 containers: []
	W1213 18:43:47.388362   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:43:47.388367   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:43:47.388431   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:43:47.412714   44722 cri.go:89] found id: ""
	I1213 18:43:47.412726   44722 logs.go:282] 0 containers: []
	W1213 18:43:47.412733   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:43:47.412738   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:43:47.412794   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:43:47.437349   44722 cri.go:89] found id: ""
	I1213 18:43:47.437363   44722 logs.go:282] 0 containers: []
	W1213 18:43:47.437369   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:43:47.437374   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:43:47.437432   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:43:47.461369   44722 cri.go:89] found id: ""
	I1213 18:43:47.461383   44722 logs.go:282] 0 containers: []
	W1213 18:43:47.461390   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:43:47.461395   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:43:47.461454   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:43:47.494140   44722 cri.go:89] found id: ""
	I1213 18:43:47.494154   44722 logs.go:282] 0 containers: []
	W1213 18:43:47.494161   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:43:47.494166   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:43:47.494223   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:43:47.519020   44722 cri.go:89] found id: ""
	I1213 18:43:47.519033   44722 logs.go:282] 0 containers: []
	W1213 18:43:47.519040   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:43:47.519047   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:43:47.519060   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:43:47.587741   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:43:47.587760   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:43:47.623942   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:43:47.623957   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:43:47.696440   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:43:47.696459   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:43:47.707187   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:43:47.707203   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:43:47.769911   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:43:47.762074   11880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:47.762544   11880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:47.764216   11880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:47.764680   11880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:47.766131   11880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:43:47.762074   11880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:47.762544   11880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:47.764216   11880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:47.764680   11880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:47.766131   11880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:43:50.270188   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:50.280132   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:43:50.280190   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:43:50.308672   44722 cri.go:89] found id: ""
	I1213 18:43:50.308686   44722 logs.go:282] 0 containers: []
	W1213 18:43:50.308693   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:43:50.308699   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:43:50.308758   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:43:50.335996   44722 cri.go:89] found id: ""
	I1213 18:43:50.336010   44722 logs.go:282] 0 containers: []
	W1213 18:43:50.336016   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:43:50.336021   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:43:50.336080   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:43:50.361733   44722 cri.go:89] found id: ""
	I1213 18:43:50.361746   44722 logs.go:282] 0 containers: []
	W1213 18:43:50.361753   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:43:50.361758   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:43:50.361816   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:43:50.387122   44722 cri.go:89] found id: ""
	I1213 18:43:50.387137   44722 logs.go:282] 0 containers: []
	W1213 18:43:50.387143   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:43:50.387148   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:43:50.387204   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:43:50.411746   44722 cri.go:89] found id: ""
	I1213 18:43:50.411760   44722 logs.go:282] 0 containers: []
	W1213 18:43:50.411766   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:43:50.411771   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:43:50.411828   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:43:50.439079   44722 cri.go:89] found id: ""
	I1213 18:43:50.439093   44722 logs.go:282] 0 containers: []
	W1213 18:43:50.439100   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:43:50.439104   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:43:50.439158   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:43:50.464264   44722 cri.go:89] found id: ""
	I1213 18:43:50.464278   44722 logs.go:282] 0 containers: []
	W1213 18:43:50.464285   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:43:50.464293   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:43:50.464303   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:43:50.530938   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:43:50.530956   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:43:50.541880   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:43:50.541897   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:43:50.622277   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:43:50.613287   11970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:50.613702   11970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:50.615208   11970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:50.615836   11970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:50.616931   11970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:43:50.613287   11970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:50.613702   11970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:50.615208   11970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:50.615836   11970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:50.616931   11970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:43:50.622299   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:43:50.622311   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:43:50.693744   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:43:50.693765   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:43:53.224830   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:53.235168   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:43:53.235224   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:43:53.261284   44722 cri.go:89] found id: ""
	I1213 18:43:53.261297   44722 logs.go:282] 0 containers: []
	W1213 18:43:53.261304   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:43:53.261309   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:43:53.261369   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:43:53.287104   44722 cri.go:89] found id: ""
	I1213 18:43:53.287118   44722 logs.go:282] 0 containers: []
	W1213 18:43:53.287125   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:43:53.287136   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:43:53.287197   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:43:53.312612   44722 cri.go:89] found id: ""
	I1213 18:43:53.312626   44722 logs.go:282] 0 containers: []
	W1213 18:43:53.312636   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:43:53.312641   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:43:53.312700   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:43:53.338548   44722 cri.go:89] found id: ""
	I1213 18:43:53.338562   44722 logs.go:282] 0 containers: []
	W1213 18:43:53.338570   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:43:53.338575   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:43:53.338634   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:43:53.363849   44722 cri.go:89] found id: ""
	I1213 18:43:53.363862   44722 logs.go:282] 0 containers: []
	W1213 18:43:53.363869   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:43:53.363874   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:43:53.363933   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:43:53.388677   44722 cri.go:89] found id: ""
	I1213 18:43:53.388693   44722 logs.go:282] 0 containers: []
	W1213 18:43:53.388700   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:43:53.388707   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:43:53.388764   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:43:53.413384   44722 cri.go:89] found id: ""
	I1213 18:43:53.413398   44722 logs.go:282] 0 containers: []
	W1213 18:43:53.413405   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:43:53.413412   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:43:53.413426   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:43:53.480895   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:43:53.480915   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:43:53.510174   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:43:53.510191   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:43:53.579252   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:43:53.579272   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:43:53.594356   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:43:53.594373   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:43:53.674807   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:43:53.667137   12096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:53.667568   12096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:53.669097   12096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:53.669497   12096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:53.670996   12096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:43:53.667137   12096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:53.667568   12096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:53.669097   12096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:53.669497   12096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:53.670996   12096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:43:56.175034   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:56.185031   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:43:56.185091   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:43:56.210252   44722 cri.go:89] found id: ""
	I1213 18:43:56.210266   44722 logs.go:282] 0 containers: []
	W1213 18:43:56.210273   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:43:56.210289   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:43:56.210345   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:43:56.238190   44722 cri.go:89] found id: ""
	I1213 18:43:56.238204   44722 logs.go:282] 0 containers: []
	W1213 18:43:56.238211   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:43:56.238216   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:43:56.238280   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:43:56.262334   44722 cri.go:89] found id: ""
	I1213 18:43:56.262361   44722 logs.go:282] 0 containers: []
	W1213 18:43:56.262368   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:43:56.262374   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:43:56.262439   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:43:56.286668   44722 cri.go:89] found id: ""
	I1213 18:43:56.286681   44722 logs.go:282] 0 containers: []
	W1213 18:43:56.286688   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:43:56.286693   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:43:56.286753   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:43:56.312401   44722 cri.go:89] found id: ""
	I1213 18:43:56.312426   44722 logs.go:282] 0 containers: []
	W1213 18:43:56.312434   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:43:56.312439   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:43:56.312514   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:43:56.337419   44722 cri.go:89] found id: ""
	I1213 18:43:56.337433   44722 logs.go:282] 0 containers: []
	W1213 18:43:56.337440   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:43:56.337446   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:43:56.337512   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:43:56.363240   44722 cri.go:89] found id: ""
	I1213 18:43:56.363252   44722 logs.go:282] 0 containers: []
	W1213 18:43:56.363259   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:43:56.363274   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:43:56.363285   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:43:56.427558   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:43:56.427576   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:43:56.438948   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:43:56.438963   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:43:56.504100   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:43:56.496063   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:56.496558   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:56.498109   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:56.498537   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:56.500111   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:43:56.496063   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:56.496558   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:56.498109   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:56.498537   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:56.500111   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:43:56.504110   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:43:56.504121   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:43:56.576300   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:43:56.576319   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:43:59.120724   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:59.131483   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:43:59.131541   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:43:59.161664   44722 cri.go:89] found id: ""
	I1213 18:43:59.161677   44722 logs.go:282] 0 containers: []
	W1213 18:43:59.161684   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:43:59.161689   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:43:59.161747   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:43:59.186541   44722 cri.go:89] found id: ""
	I1213 18:43:59.186554   44722 logs.go:282] 0 containers: []
	W1213 18:43:59.186561   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:43:59.186566   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:43:59.186631   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:43:59.214613   44722 cri.go:89] found id: ""
	I1213 18:43:59.214627   44722 logs.go:282] 0 containers: []
	W1213 18:43:59.214634   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:43:59.214639   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:43:59.214696   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:43:59.239790   44722 cri.go:89] found id: ""
	I1213 18:43:59.239803   44722 logs.go:282] 0 containers: []
	W1213 18:43:59.239810   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:43:59.239815   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:43:59.239881   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:43:59.268177   44722 cri.go:89] found id: ""
	I1213 18:43:59.268191   44722 logs.go:282] 0 containers: []
	W1213 18:43:59.268198   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:43:59.268203   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:43:59.268267   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:43:59.292660   44722 cri.go:89] found id: ""
	I1213 18:43:59.292674   44722 logs.go:282] 0 containers: []
	W1213 18:43:59.292680   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:43:59.292687   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:43:59.292746   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:43:59.318413   44722 cri.go:89] found id: ""
	I1213 18:43:59.318428   44722 logs.go:282] 0 containers: []
	W1213 18:43:59.318434   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:43:59.318442   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:43:59.318453   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:43:59.383565   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:43:59.383584   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:43:59.394753   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:43:59.394770   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:43:59.455757   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:43:59.448022   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:59.448571   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:59.450046   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:59.450376   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:59.451813   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:43:59.448022   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:59.448571   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:59.450046   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:59.450376   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:59.451813   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:43:59.455767   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:43:59.455777   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:43:59.527189   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:43:59.527209   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:02.063131   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:02.073460   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:02.073527   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:02.100600   44722 cri.go:89] found id: ""
	I1213 18:44:02.100614   44722 logs.go:282] 0 containers: []
	W1213 18:44:02.100621   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:02.100626   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:02.100683   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:02.128484   44722 cri.go:89] found id: ""
	I1213 18:44:02.128498   44722 logs.go:282] 0 containers: []
	W1213 18:44:02.128505   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:02.128510   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:02.128569   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:02.153979   44722 cri.go:89] found id: ""
	I1213 18:44:02.153994   44722 logs.go:282] 0 containers: []
	W1213 18:44:02.154000   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:02.154005   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:02.154063   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:02.178950   44722 cri.go:89] found id: ""
	I1213 18:44:02.178964   44722 logs.go:282] 0 containers: []
	W1213 18:44:02.178971   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:02.178975   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:02.179034   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:02.203560   44722 cri.go:89] found id: ""
	I1213 18:44:02.203573   44722 logs.go:282] 0 containers: []
	W1213 18:44:02.203599   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:02.203604   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:02.203668   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:02.235040   44722 cri.go:89] found id: ""
	I1213 18:44:02.235054   44722 logs.go:282] 0 containers: []
	W1213 18:44:02.235061   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:02.235066   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:02.235125   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:02.262563   44722 cri.go:89] found id: ""
	I1213 18:44:02.262578   44722 logs.go:282] 0 containers: []
	W1213 18:44:02.262591   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:02.262598   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:02.262610   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:02.330429   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:02.330448   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:02.358932   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:02.358953   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:02.430089   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:02.430108   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:02.441162   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:02.441179   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:02.505804   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:02.496664   12407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:02.498082   12407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:02.499014   12407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:02.500016   12407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:02.500340   12407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:02.496664   12407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:02.498082   12407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:02.499014   12407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:02.500016   12407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:02.500340   12407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:05.006147   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:05.021965   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:05.022041   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:05.052122   44722 cri.go:89] found id: ""
	I1213 18:44:05.052138   44722 logs.go:282] 0 containers: []
	W1213 18:44:05.052145   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:05.052152   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:05.052213   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:05.079304   44722 cri.go:89] found id: ""
	I1213 18:44:05.079318   44722 logs.go:282] 0 containers: []
	W1213 18:44:05.079325   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:05.079330   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:05.079387   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:05.106489   44722 cri.go:89] found id: ""
	I1213 18:44:05.106502   44722 logs.go:282] 0 containers: []
	W1213 18:44:05.106510   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:05.106515   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:05.106573   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:05.132104   44722 cri.go:89] found id: ""
	I1213 18:44:05.132118   44722 logs.go:282] 0 containers: []
	W1213 18:44:05.132125   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:05.132130   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:05.132186   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:05.157774   44722 cri.go:89] found id: ""
	I1213 18:44:05.157789   44722 logs.go:282] 0 containers: []
	W1213 18:44:05.157795   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:05.157800   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:05.157860   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:05.185228   44722 cri.go:89] found id: ""
	I1213 18:44:05.185241   44722 logs.go:282] 0 containers: []
	W1213 18:44:05.185248   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:05.185254   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:05.185313   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:05.211945   44722 cri.go:89] found id: ""
	I1213 18:44:05.211959   44722 logs.go:282] 0 containers: []
	W1213 18:44:05.211965   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:05.211973   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:05.211982   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:05.240000   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:05.240016   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:05.305313   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:05.305331   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:05.316614   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:05.316628   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:05.380462   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:05.372183   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:05.373062   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:05.374815   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:05.375112   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:05.376609   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:05.372183   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:05.373062   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:05.374815   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:05.375112   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:05.376609   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:05.380472   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:05.380482   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:07.948856   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:07.959788   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:07.959853   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:07.985640   44722 cri.go:89] found id: ""
	I1213 18:44:07.985655   44722 logs.go:282] 0 containers: []
	W1213 18:44:07.985662   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:07.985667   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:07.985735   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:08.017082   44722 cri.go:89] found id: ""
	I1213 18:44:08.017096   44722 logs.go:282] 0 containers: []
	W1213 18:44:08.017105   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:08.017111   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:08.017176   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:08.046580   44722 cri.go:89] found id: ""
	I1213 18:44:08.046595   44722 logs.go:282] 0 containers: []
	W1213 18:44:08.046603   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:08.046609   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:08.046678   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:08.073255   44722 cri.go:89] found id: ""
	I1213 18:44:08.073269   44722 logs.go:282] 0 containers: []
	W1213 18:44:08.073275   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:08.073281   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:08.073342   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:08.101465   44722 cri.go:89] found id: ""
	I1213 18:44:08.101479   44722 logs.go:282] 0 containers: []
	W1213 18:44:08.101486   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:08.101491   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:08.101560   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:08.126539   44722 cri.go:89] found id: ""
	I1213 18:44:08.126553   44722 logs.go:282] 0 containers: []
	W1213 18:44:08.126559   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:08.126564   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:08.126624   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:08.151274   44722 cri.go:89] found id: ""
	I1213 18:44:08.151287   44722 logs.go:282] 0 containers: []
	W1213 18:44:08.151294   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:08.151301   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:08.151311   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:08.221734   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:08.221760   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:08.234257   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:08.234274   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:08.303822   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:08.293709   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:08.294557   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:08.296695   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:08.297712   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:08.298655   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:08.293709   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:08.294557   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:08.296695   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:08.297712   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:08.298655   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:08.303834   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:08.303846   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:08.373320   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:08.373340   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:10.905140   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:10.916748   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:10.916820   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:10.944090   44722 cri.go:89] found id: ""
	I1213 18:44:10.944103   44722 logs.go:282] 0 containers: []
	W1213 18:44:10.944111   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:10.944115   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:10.944176   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:10.969154   44722 cri.go:89] found id: ""
	I1213 18:44:10.969168   44722 logs.go:282] 0 containers: []
	W1213 18:44:10.969174   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:10.969179   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:10.969237   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:10.994056   44722 cri.go:89] found id: ""
	I1213 18:44:10.994070   44722 logs.go:282] 0 containers: []
	W1213 18:44:10.994078   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:10.994082   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:10.994195   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:11.026335   44722 cri.go:89] found id: ""
	I1213 18:44:11.026349   44722 logs.go:282] 0 containers: []
	W1213 18:44:11.026356   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:11.026362   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:11.026420   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:11.051618   44722 cri.go:89] found id: ""
	I1213 18:44:11.051632   44722 logs.go:282] 0 containers: []
	W1213 18:44:11.051639   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:11.051644   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:11.051702   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:11.077796   44722 cri.go:89] found id: ""
	I1213 18:44:11.077811   44722 logs.go:282] 0 containers: []
	W1213 18:44:11.077818   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:11.077824   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:11.077885   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:11.106061   44722 cri.go:89] found id: ""
	I1213 18:44:11.106082   44722 logs.go:282] 0 containers: []
	W1213 18:44:11.106089   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:11.106096   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:11.106107   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:11.172632   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:11.164014   12707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:11.164956   12707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:11.166552   12707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:11.167108   12707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:11.168668   12707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:11.164014   12707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:11.164956   12707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:11.166552   12707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:11.167108   12707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:11.168668   12707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:11.172644   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:11.172654   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:11.241474   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:11.241492   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:11.270376   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:11.270394   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:11.335341   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:11.335360   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:13.846544   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:13.858154   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:13.858216   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:13.891714   44722 cri.go:89] found id: ""
	I1213 18:44:13.891728   44722 logs.go:282] 0 containers: []
	W1213 18:44:13.891735   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:13.891740   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:13.891796   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:13.917089   44722 cri.go:89] found id: ""
	I1213 18:44:13.917103   44722 logs.go:282] 0 containers: []
	W1213 18:44:13.917110   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:13.917115   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:13.917175   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:13.942618   44722 cri.go:89] found id: ""
	I1213 18:44:13.942637   44722 logs.go:282] 0 containers: []
	W1213 18:44:13.942644   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:13.942654   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:13.942717   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:13.972824   44722 cri.go:89] found id: ""
	I1213 18:44:13.972837   44722 logs.go:282] 0 containers: []
	W1213 18:44:13.972844   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:13.972850   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:13.972911   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:14.002454   44722 cri.go:89] found id: ""
	I1213 18:44:14.002478   44722 logs.go:282] 0 containers: []
	W1213 18:44:14.002507   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:14.002515   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:14.002584   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:14.029621   44722 cri.go:89] found id: ""
	I1213 18:44:14.029635   44722 logs.go:282] 0 containers: []
	W1213 18:44:14.029642   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:14.029647   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:14.029705   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:14.059348   44722 cri.go:89] found id: ""
	I1213 18:44:14.059361   44722 logs.go:282] 0 containers: []
	W1213 18:44:14.059368   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:14.059376   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:14.059386   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:14.089028   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:14.089044   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:14.154770   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:14.154787   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:14.165718   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:14.165733   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:14.229870   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:14.221572   12825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:14.222738   12825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:14.223785   12825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:14.224389   12825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:14.225986   12825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:14.221572   12825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:14.222738   12825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:14.223785   12825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:14.224389   12825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:14.225986   12825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:14.229881   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:14.229893   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:16.799799   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:16.810049   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:16.810109   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:16.841177   44722 cri.go:89] found id: ""
	I1213 18:44:16.841190   44722 logs.go:282] 0 containers: []
	W1213 18:44:16.841197   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:16.841202   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:16.841258   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:16.867562   44722 cri.go:89] found id: ""
	I1213 18:44:16.867576   44722 logs.go:282] 0 containers: []
	W1213 18:44:16.867583   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:16.867588   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:16.867647   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:16.894362   44722 cri.go:89] found id: ""
	I1213 18:44:16.894376   44722 logs.go:282] 0 containers: []
	W1213 18:44:16.894383   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:16.894388   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:16.894449   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:16.922192   44722 cri.go:89] found id: ""
	I1213 18:44:16.922205   44722 logs.go:282] 0 containers: []
	W1213 18:44:16.922212   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:16.922217   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:16.922274   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:16.947061   44722 cri.go:89] found id: ""
	I1213 18:44:16.947081   44722 logs.go:282] 0 containers: []
	W1213 18:44:16.947088   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:16.947093   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:16.947151   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:16.973311   44722 cri.go:89] found id: ""
	I1213 18:44:16.973337   44722 logs.go:282] 0 containers: []
	W1213 18:44:16.973345   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:16.973349   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:16.973409   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:17.002040   44722 cri.go:89] found id: ""
	I1213 18:44:17.002056   44722 logs.go:282] 0 containers: []
	W1213 18:44:17.002077   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:17.002086   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:17.002097   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:17.070995   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:17.062754   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:17.063352   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:17.064945   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:17.065473   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:17.066944   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:17.062754   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:17.063352   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:17.064945   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:17.065473   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:17.066944   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:17.071005   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:17.071015   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:17.142450   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:17.142467   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:17.174618   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:17.174636   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:17.245843   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:17.245861   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:19.758316   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:19.768061   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:19.768139   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:19.793023   44722 cri.go:89] found id: ""
	I1213 18:44:19.793037   44722 logs.go:282] 0 containers: []
	W1213 18:44:19.793044   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:19.793049   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:19.793113   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:19.817629   44722 cri.go:89] found id: ""
	I1213 18:44:19.817643   44722 logs.go:282] 0 containers: []
	W1213 18:44:19.817649   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:19.817654   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:19.817710   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:19.851145   44722 cri.go:89] found id: ""
	I1213 18:44:19.851159   44722 logs.go:282] 0 containers: []
	W1213 18:44:19.851166   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:19.851170   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:19.851234   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:19.881252   44722 cri.go:89] found id: ""
	I1213 18:44:19.881265   44722 logs.go:282] 0 containers: []
	W1213 18:44:19.881272   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:19.881277   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:19.881339   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:19.912741   44722 cri.go:89] found id: ""
	I1213 18:44:19.912754   44722 logs.go:282] 0 containers: []
	W1213 18:44:19.912761   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:19.912766   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:19.912823   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:19.940085   44722 cri.go:89] found id: ""
	I1213 18:44:19.940098   44722 logs.go:282] 0 containers: []
	W1213 18:44:19.940105   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:19.940110   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:19.940168   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:19.967047   44722 cri.go:89] found id: ""
	I1213 18:44:19.967061   44722 logs.go:282] 0 containers: []
	W1213 18:44:19.967067   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:19.967081   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:19.967092   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:20.039016   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:20.039038   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:20.052809   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:20.052826   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:20.124568   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:20.115906   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:20.116315   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:20.118019   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:20.118655   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:20.120394   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:20.115906   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:20.116315   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:20.118019   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:20.118655   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:20.120394   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:20.124579   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:20.124595   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:20.192989   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:20.193017   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:22.722315   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:22.732622   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:22.732684   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:22.757530   44722 cri.go:89] found id: ""
	I1213 18:44:22.757544   44722 logs.go:282] 0 containers: []
	W1213 18:44:22.757551   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:22.757556   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:22.757614   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:22.783868   44722 cri.go:89] found id: ""
	I1213 18:44:22.783891   44722 logs.go:282] 0 containers: []
	W1213 18:44:22.783899   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:22.783906   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:22.783973   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:22.809581   44722 cri.go:89] found id: ""
	I1213 18:44:22.809602   44722 logs.go:282] 0 containers: []
	W1213 18:44:22.809610   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:22.809615   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:22.809676   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:22.844651   44722 cri.go:89] found id: ""
	I1213 18:44:22.844665   44722 logs.go:282] 0 containers: []
	W1213 18:44:22.844672   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:22.844677   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:22.844734   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:22.878207   44722 cri.go:89] found id: ""
	I1213 18:44:22.878221   44722 logs.go:282] 0 containers: []
	W1213 18:44:22.878228   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:22.878233   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:22.878291   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:22.909295   44722 cri.go:89] found id: ""
	I1213 18:44:22.909309   44722 logs.go:282] 0 containers: []
	W1213 18:44:22.909316   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:22.909322   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:22.909382   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:22.936178   44722 cri.go:89] found id: ""
	I1213 18:44:22.936191   44722 logs.go:282] 0 containers: []
	W1213 18:44:22.936207   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:22.936215   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:22.936225   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:23.005296   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:22.992378   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:22.993185   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:22.994804   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:22.995396   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:22.997070   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:22.992378   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:22.993185   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:22.994804   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:22.995396   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:22.997070   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:23.005308   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:23.005319   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:23.079778   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:23.079797   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:23.109955   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:23.109982   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:23.176235   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:23.176252   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:25.689578   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:25.699921   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:25.699979   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:25.723877   44722 cri.go:89] found id: ""
	I1213 18:44:25.723891   44722 logs.go:282] 0 containers: []
	W1213 18:44:25.723898   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:25.723902   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:25.723959   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:25.749128   44722 cri.go:89] found id: ""
	I1213 18:44:25.749142   44722 logs.go:282] 0 containers: []
	W1213 18:44:25.749148   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:25.749153   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:25.749209   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:25.773791   44722 cri.go:89] found id: ""
	I1213 18:44:25.773811   44722 logs.go:282] 0 containers: []
	W1213 18:44:25.773818   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:25.773823   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:25.773881   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:25.799904   44722 cri.go:89] found id: ""
	I1213 18:44:25.799917   44722 logs.go:282] 0 containers: []
	W1213 18:44:25.799924   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:25.799929   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:25.799988   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:25.825978   44722 cri.go:89] found id: ""
	I1213 18:44:25.825992   44722 logs.go:282] 0 containers: []
	W1213 18:44:25.825999   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:25.826004   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:25.826061   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:25.861824   44722 cri.go:89] found id: ""
	I1213 18:44:25.861838   44722 logs.go:282] 0 containers: []
	W1213 18:44:25.861854   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:25.861860   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:25.861917   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:25.899196   44722 cri.go:89] found id: ""
	I1213 18:44:25.899209   44722 logs.go:282] 0 containers: []
	W1213 18:44:25.899227   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:25.899235   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:25.899245   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:25.962230   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:25.953208   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:25.953997   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:25.955726   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:25.956332   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:25.957845   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:25.953208   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:25.953997   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:25.955726   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:25.956332   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:25.957845   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:25.962249   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:25.962260   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:26.029250   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:26.029269   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:26.058026   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:26.058045   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:26.126957   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:26.126975   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:28.638630   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:28.649197   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:28.649261   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:28.678140   44722 cri.go:89] found id: ""
	I1213 18:44:28.678155   44722 logs.go:282] 0 containers: []
	W1213 18:44:28.678162   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:28.678166   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:28.678225   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:28.704240   44722 cri.go:89] found id: ""
	I1213 18:44:28.704253   44722 logs.go:282] 0 containers: []
	W1213 18:44:28.704266   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:28.704271   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:28.704332   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:28.729471   44722 cri.go:89] found id: ""
	I1213 18:44:28.729484   44722 logs.go:282] 0 containers: []
	W1213 18:44:28.729492   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:28.729499   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:28.729560   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:28.755384   44722 cri.go:89] found id: ""
	I1213 18:44:28.755397   44722 logs.go:282] 0 containers: []
	W1213 18:44:28.755404   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:28.755419   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:28.755527   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:28.780729   44722 cri.go:89] found id: ""
	I1213 18:44:28.780742   44722 logs.go:282] 0 containers: []
	W1213 18:44:28.780749   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:28.780754   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:28.780819   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:28.807414   44722 cri.go:89] found id: ""
	I1213 18:44:28.807428   44722 logs.go:282] 0 containers: []
	W1213 18:44:28.807434   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:28.807439   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:28.807495   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:28.834478   44722 cri.go:89] found id: ""
	I1213 18:44:28.834492   44722 logs.go:282] 0 containers: []
	W1213 18:44:28.834501   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:28.834509   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:28.834519   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:28.928552   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:28.919277   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:28.920155   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:28.921759   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:28.922310   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:28.923982   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:28.919277   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:28.920155   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:28.921759   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:28.922310   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:28.923982   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:28.928563   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:28.928572   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:28.998427   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:28.998448   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:29.028696   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:29.028713   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:29.094175   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:29.094194   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:31.605517   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:31.616232   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:31.616297   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:31.642711   44722 cri.go:89] found id: ""
	I1213 18:44:31.642725   44722 logs.go:282] 0 containers: []
	W1213 18:44:31.642733   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:31.642738   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:31.642796   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:31.669186   44722 cri.go:89] found id: ""
	I1213 18:44:31.669201   44722 logs.go:282] 0 containers: []
	W1213 18:44:31.669208   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:31.669212   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:31.669271   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:31.696754   44722 cri.go:89] found id: ""
	I1213 18:44:31.696768   44722 logs.go:282] 0 containers: []
	W1213 18:44:31.696775   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:31.696780   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:31.696840   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:31.722602   44722 cri.go:89] found id: ""
	I1213 18:44:31.722616   44722 logs.go:282] 0 containers: []
	W1213 18:44:31.722623   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:31.722628   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:31.722687   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:31.749280   44722 cri.go:89] found id: ""
	I1213 18:44:31.749294   44722 logs.go:282] 0 containers: []
	W1213 18:44:31.749302   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:31.749307   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:31.749386   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:31.774452   44722 cri.go:89] found id: ""
	I1213 18:44:31.774466   44722 logs.go:282] 0 containers: []
	W1213 18:44:31.774473   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:31.774478   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:31.774536   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:31.804250   44722 cri.go:89] found id: ""
	I1213 18:44:31.804264   44722 logs.go:282] 0 containers: []
	W1213 18:44:31.804271   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:31.804278   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:31.804288   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:31.876057   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:31.876075   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:31.887830   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:31.887845   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:31.956181   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:31.947856   13442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:31.948537   13442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:31.950179   13442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:31.950675   13442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:31.952236   13442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:31.947856   13442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:31.948537   13442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:31.950179   13442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:31.950675   13442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:31.952236   13442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:31.956191   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:31.956202   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:32.025697   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:32.025716   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:34.558938   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:34.569025   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:34.569094   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:34.598446   44722 cri.go:89] found id: ""
	I1213 18:44:34.598459   44722 logs.go:282] 0 containers: []
	W1213 18:44:34.598466   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:34.598470   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:34.598537   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:34.624087   44722 cri.go:89] found id: ""
	I1213 18:44:34.624105   44722 logs.go:282] 0 containers: []
	W1213 18:44:34.624132   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:34.624137   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:34.624204   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:34.649175   44722 cri.go:89] found id: ""
	I1213 18:44:34.649189   44722 logs.go:282] 0 containers: []
	W1213 18:44:34.649196   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:34.649201   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:34.649257   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:34.679802   44722 cri.go:89] found id: ""
	I1213 18:44:34.679816   44722 logs.go:282] 0 containers: []
	W1213 18:44:34.679823   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:34.679828   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:34.679886   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:34.706842   44722 cri.go:89] found id: ""
	I1213 18:44:34.706856   44722 logs.go:282] 0 containers: []
	W1213 18:44:34.706863   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:34.706868   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:34.706928   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:34.732851   44722 cri.go:89] found id: ""
	I1213 18:44:34.732878   44722 logs.go:282] 0 containers: []
	W1213 18:44:34.732885   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:34.732906   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:34.732972   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:34.758491   44722 cri.go:89] found id: ""
	I1213 18:44:34.758504   44722 logs.go:282] 0 containers: []
	W1213 18:44:34.758511   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:34.758520   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:34.758530   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:34.831184   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:34.831212   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:34.854446   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:34.854463   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:34.939932   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:34.930787   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:34.931550   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:34.933427   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:34.934090   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:34.935671   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:34.930787   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:34.931550   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:34.933427   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:34.934090   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:34.935671   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:34.939943   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:34.939953   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:35.008351   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:35.008373   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:37.538092   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:37.548372   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:37.548433   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:37.576028   44722 cri.go:89] found id: ""
	I1213 18:44:37.576042   44722 logs.go:282] 0 containers: []
	W1213 18:44:37.576049   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:37.576054   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:37.576116   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:37.601240   44722 cri.go:89] found id: ""
	I1213 18:44:37.601264   44722 logs.go:282] 0 containers: []
	W1213 18:44:37.601272   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:37.601277   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:37.601354   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:37.629739   44722 cri.go:89] found id: ""
	I1213 18:44:37.629752   44722 logs.go:282] 0 containers: []
	W1213 18:44:37.629759   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:37.629764   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:37.629821   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:37.659547   44722 cri.go:89] found id: ""
	I1213 18:44:37.659560   44722 logs.go:282] 0 containers: []
	W1213 18:44:37.659567   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:37.659582   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:37.659639   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:37.687820   44722 cri.go:89] found id: ""
	I1213 18:44:37.687833   44722 logs.go:282] 0 containers: []
	W1213 18:44:37.687841   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:37.687846   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:37.687913   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:37.713950   44722 cri.go:89] found id: ""
	I1213 18:44:37.713964   44722 logs.go:282] 0 containers: []
	W1213 18:44:37.713971   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:37.713976   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:37.714035   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:37.739532   44722 cri.go:89] found id: ""
	I1213 18:44:37.739557   44722 logs.go:282] 0 containers: []
	W1213 18:44:37.739564   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:37.739572   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:37.739588   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:37.769815   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:37.769831   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:37.842765   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:37.842782   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:37.856389   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:37.856405   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:37.939080   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:37.930901   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:37.931464   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:37.933144   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:37.933671   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:37.935120   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:37.930901   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:37.931464   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:37.933144   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:37.933671   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:37.935120   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:37.939091   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:37.939101   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:40.510055   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:40.520003   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:40.520078   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:40.546166   44722 cri.go:89] found id: ""
	I1213 18:44:40.546181   44722 logs.go:282] 0 containers: []
	W1213 18:44:40.546187   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:40.546193   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:40.546255   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:40.575492   44722 cri.go:89] found id: ""
	I1213 18:44:40.575506   44722 logs.go:282] 0 containers: []
	W1213 18:44:40.575512   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:40.575517   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:40.575572   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:40.604021   44722 cri.go:89] found id: ""
	I1213 18:44:40.604034   44722 logs.go:282] 0 containers: []
	W1213 18:44:40.604042   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:40.604047   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:40.604103   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:40.634511   44722 cri.go:89] found id: ""
	I1213 18:44:40.634525   44722 logs.go:282] 0 containers: []
	W1213 18:44:40.634533   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:40.634537   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:40.634597   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:40.659233   44722 cri.go:89] found id: ""
	I1213 18:44:40.659255   44722 logs.go:282] 0 containers: []
	W1213 18:44:40.659263   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:40.659268   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:40.659327   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:40.684289   44722 cri.go:89] found id: ""
	I1213 18:44:40.684314   44722 logs.go:282] 0 containers: []
	W1213 18:44:40.684321   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:40.684326   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:40.684401   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:40.716236   44722 cri.go:89] found id: ""
	I1213 18:44:40.716250   44722 logs.go:282] 0 containers: []
	W1213 18:44:40.716258   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:40.716265   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:40.716277   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:40.743946   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:40.743962   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:40.809441   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:40.809459   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:40.820434   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:40.820458   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:40.906406   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:40.898049   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:40.898672   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:40.900282   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:40.900803   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:40.902445   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:40.898049   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:40.898672   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:40.900282   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:40.900803   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:40.902445   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:40.906416   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:40.906426   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:43.474264   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:43.484255   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:43.484319   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:43.511963   44722 cri.go:89] found id: ""
	I1213 18:44:43.511977   44722 logs.go:282] 0 containers: []
	W1213 18:44:43.511984   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:43.511989   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:43.512049   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:43.537311   44722 cri.go:89] found id: ""
	I1213 18:44:43.537332   44722 logs.go:282] 0 containers: []
	W1213 18:44:43.537339   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:43.537343   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:43.537433   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:43.564197   44722 cri.go:89] found id: ""
	I1213 18:44:43.564211   44722 logs.go:282] 0 containers: []
	W1213 18:44:43.564218   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:43.564222   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:43.564278   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:43.590140   44722 cri.go:89] found id: ""
	I1213 18:44:43.590154   44722 logs.go:282] 0 containers: []
	W1213 18:44:43.590160   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:43.590166   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:43.590226   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:43.615885   44722 cri.go:89] found id: ""
	I1213 18:44:43.615900   44722 logs.go:282] 0 containers: []
	W1213 18:44:43.615916   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:43.615921   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:43.615987   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:43.640848   44722 cri.go:89] found id: ""
	I1213 18:44:43.640862   44722 logs.go:282] 0 containers: []
	W1213 18:44:43.640868   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:43.640873   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:43.640931   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:43.665363   44722 cri.go:89] found id: ""
	I1213 18:44:43.665377   44722 logs.go:282] 0 containers: []
	W1213 18:44:43.665384   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:43.665391   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:43.665403   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:43.676205   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:43.676227   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:43.739640   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:43.731228   13863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:43.732007   13863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:43.733627   13863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:43.734165   13863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:43.735773   13863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:43.731228   13863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:43.732007   13863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:43.733627   13863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:43.734165   13863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:43.735773   13863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:43.739650   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:43.739661   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:43.807987   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:43.808008   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:43.851586   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:43.851601   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:46.426151   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:46.436240   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:46.436307   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:46.469030   44722 cri.go:89] found id: ""
	I1213 18:44:46.469044   44722 logs.go:282] 0 containers: []
	W1213 18:44:46.469051   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:46.469056   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:46.469115   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:46.494555   44722 cri.go:89] found id: ""
	I1213 18:44:46.494568   44722 logs.go:282] 0 containers: []
	W1213 18:44:46.494575   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:46.494580   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:46.494638   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:46.519291   44722 cri.go:89] found id: ""
	I1213 18:44:46.519305   44722 logs.go:282] 0 containers: []
	W1213 18:44:46.519312   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:46.519316   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:46.519371   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:46.547775   44722 cri.go:89] found id: ""
	I1213 18:44:46.547790   44722 logs.go:282] 0 containers: []
	W1213 18:44:46.547797   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:46.547802   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:46.547860   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:46.572951   44722 cri.go:89] found id: ""
	I1213 18:44:46.572965   44722 logs.go:282] 0 containers: []
	W1213 18:44:46.572972   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:46.572978   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:46.573096   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:46.598953   44722 cri.go:89] found id: ""
	I1213 18:44:46.598967   44722 logs.go:282] 0 containers: []
	W1213 18:44:46.598973   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:46.598979   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:46.599036   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:46.624426   44722 cri.go:89] found id: ""
	I1213 18:44:46.624440   44722 logs.go:282] 0 containers: []
	W1213 18:44:46.624447   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:46.624454   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:46.624465   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:46.656272   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:46.656289   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:46.720505   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:46.720523   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:46.731422   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:46.731438   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:46.794954   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:46.786465   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:46.786956   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:46.788689   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:46.789067   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:46.790678   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:46.786465   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:46.786956   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:46.788689   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:46.789067   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:46.790678   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:46.794964   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:46.794974   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
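The cycle above repeats one probe sequence: check for a kube-apiserver process with pgrep, ask the CRI runtime for each expected control-plane container with crictl ps --quiet, and, when nothing is found, gather kubelet, dmesg, describe-nodes and CRI-O logs before retrying a few seconds later. A minimal sketch of that wait-and-probe loop, assuming a hypothetical runCmd helper in place of minikube's real ssh_runner (this is an illustration, not minikube's actual implementation), could look like this:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// runCmd is a hypothetical stand-in for minikube's ssh_runner: run a command
// on the node and return its trimmed stdout.
func runCmd(name string, args ...string) (string, error) {
	out, err := exec.Command(name, args...).Output()
	return strings.TrimSpace(string(out)), err
}

// apiserverUp mirrors the probes in the log: a matching process, or a CRI
// container named kube-apiserver.
func apiserverUp() bool {
	if _, err := runCmd("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*"); err == nil {
		return true
	}
	ids, err := runCmd("sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver")
	return err == nil && ids != ""
}

func main() {
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		if apiserverUp() {
			fmt.Println("kube-apiserver is running")
			return
		}
		// This is the point where the real code gathers kubelet/dmesg/CRI-O logs.
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}

In this run the probes never succeed, so the loop keeps cycling until the outer test times out.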
	I1213 18:44:49.368713   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:49.379093   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:49.379150   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:49.404638   44722 cri.go:89] found id: ""
	I1213 18:44:49.404652   44722 logs.go:282] 0 containers: []
	W1213 18:44:49.404670   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:49.404676   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:49.404743   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:49.432165   44722 cri.go:89] found id: ""
	I1213 18:44:49.432185   44722 logs.go:282] 0 containers: []
	W1213 18:44:49.432192   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:49.432203   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:49.432274   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:49.457580   44722 cri.go:89] found id: ""
	I1213 18:44:49.457594   44722 logs.go:282] 0 containers: []
	W1213 18:44:49.457601   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:49.457605   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:49.457661   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:49.482518   44722 cri.go:89] found id: ""
	I1213 18:44:49.482531   44722 logs.go:282] 0 containers: []
	W1213 18:44:49.482539   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:49.482544   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:49.482604   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:49.508421   44722 cri.go:89] found id: ""
	I1213 18:44:49.508435   44722 logs.go:282] 0 containers: []
	W1213 18:44:49.508442   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:49.508447   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:49.508505   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:49.533273   44722 cri.go:89] found id: ""
	I1213 18:44:49.533286   44722 logs.go:282] 0 containers: []
	W1213 18:44:49.533293   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:49.533298   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:49.533363   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:49.559407   44722 cri.go:89] found id: ""
	I1213 18:44:49.559421   44722 logs.go:282] 0 containers: []
	W1213 18:44:49.559428   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:49.559436   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:49.559447   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:49.586863   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:49.586880   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:49.655301   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:49.655318   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:49.666641   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:49.666657   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:49.731547   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:49.723390   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:49.723925   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:49.725596   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:49.726135   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:49.727809   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:49.723390   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:49.723925   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:49.725596   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:49.726135   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:49.727809   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:49.731558   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:49.731569   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:52.302228   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:52.312354   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:52.312414   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:52.339337   44722 cri.go:89] found id: ""
	I1213 18:44:52.339351   44722 logs.go:282] 0 containers: []
	W1213 18:44:52.339358   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:52.339363   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:52.339428   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:52.364722   44722 cri.go:89] found id: ""
	I1213 18:44:52.364736   44722 logs.go:282] 0 containers: []
	W1213 18:44:52.364744   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:52.364748   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:52.364807   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:52.392869   44722 cri.go:89] found id: ""
	I1213 18:44:52.392883   44722 logs.go:282] 0 containers: []
	W1213 18:44:52.392889   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:52.392894   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:52.392952   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:52.420101   44722 cri.go:89] found id: ""
	I1213 18:44:52.420115   44722 logs.go:282] 0 containers: []
	W1213 18:44:52.420122   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:52.420126   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:52.420186   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:52.444708   44722 cri.go:89] found id: ""
	I1213 18:44:52.444721   44722 logs.go:282] 0 containers: []
	W1213 18:44:52.444728   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:52.444733   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:52.444789   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:52.470027   44722 cri.go:89] found id: ""
	I1213 18:44:52.470041   44722 logs.go:282] 0 containers: []
	W1213 18:44:52.470048   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:52.470053   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:52.470112   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:52.494761   44722 cri.go:89] found id: ""
	I1213 18:44:52.494775   44722 logs.go:282] 0 containers: []
	W1213 18:44:52.494782   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:52.494789   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:52.494799   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:52.563435   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:52.563455   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:52.597529   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:52.597545   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:52.667889   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:52.667909   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:52.679020   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:52.679036   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:52.744141   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:52.735527   14192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:52.736263   14192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:52.738012   14192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:52.738630   14192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:52.740366   14192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:52.735527   14192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:52.736263   14192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:52.738012   14192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:52.738630   14192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:52.740366   14192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:55.245804   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:55.256306   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:55.256370   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:55.283000   44722 cri.go:89] found id: ""
	I1213 18:44:55.283013   44722 logs.go:282] 0 containers: []
	W1213 18:44:55.283020   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:55.283025   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:55.283082   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:55.313671   44722 cri.go:89] found id: ""
	I1213 18:44:55.313684   44722 logs.go:282] 0 containers: []
	W1213 18:44:55.313690   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:55.313695   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:55.313755   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:55.342037   44722 cri.go:89] found id: ""
	I1213 18:44:55.342051   44722 logs.go:282] 0 containers: []
	W1213 18:44:55.342059   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:55.342064   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:55.342127   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:55.367525   44722 cri.go:89] found id: ""
	I1213 18:44:55.367538   44722 logs.go:282] 0 containers: []
	W1213 18:44:55.367557   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:55.367562   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:55.367628   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:55.393243   44722 cri.go:89] found id: ""
	I1213 18:44:55.393257   44722 logs.go:282] 0 containers: []
	W1213 18:44:55.393274   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:55.393280   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:55.393353   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:55.418513   44722 cri.go:89] found id: ""
	I1213 18:44:55.418527   44722 logs.go:282] 0 containers: []
	W1213 18:44:55.418534   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:55.418539   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:55.418607   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:55.443468   44722 cri.go:89] found id: ""
	I1213 18:44:55.443483   44722 logs.go:282] 0 containers: []
	W1213 18:44:55.443490   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:55.443500   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:55.443511   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:55.515427   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:55.507029   14277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:55.507943   14277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:55.509657   14277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:55.510148   14277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:55.511618   14277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:55.507029   14277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:55.507943   14277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:55.509657   14277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:55.510148   14277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:55.511618   14277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:55.515437   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:55.515448   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:55.586865   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:55.586885   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:55.616109   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:55.616125   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:55.685952   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:55.685972   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:58.198520   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:58.208638   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:58.208696   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:58.234480   44722 cri.go:89] found id: ""
	I1213 18:44:58.234494   44722 logs.go:282] 0 containers: []
	W1213 18:44:58.234501   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:58.234506   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:58.234561   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:58.258261   44722 cri.go:89] found id: ""
	I1213 18:44:58.258274   44722 logs.go:282] 0 containers: []
	W1213 18:44:58.258281   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:58.258287   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:58.258358   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:58.282891   44722 cri.go:89] found id: ""
	I1213 18:44:58.282904   44722 logs.go:282] 0 containers: []
	W1213 18:44:58.282911   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:58.282916   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:58.282971   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:58.315746   44722 cri.go:89] found id: ""
	I1213 18:44:58.315760   44722 logs.go:282] 0 containers: []
	W1213 18:44:58.315766   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:58.315771   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:58.315830   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:58.340701   44722 cri.go:89] found id: ""
	I1213 18:44:58.340714   44722 logs.go:282] 0 containers: []
	W1213 18:44:58.340721   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:58.340726   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:58.340792   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:58.369974   44722 cri.go:89] found id: ""
	I1213 18:44:58.369987   44722 logs.go:282] 0 containers: []
	W1213 18:44:58.369994   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:58.369998   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:58.370063   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:58.398903   44722 cri.go:89] found id: ""
	I1213 18:44:58.398917   44722 logs.go:282] 0 containers: []
	W1213 18:44:58.398924   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:58.398932   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:58.398945   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:58.468133   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:58.468153   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:58.495769   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:58.495787   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:58.562032   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:58.562052   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:58.573192   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:58.573208   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:58.639058   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:58.631176   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:58.631711   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:58.633329   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:58.633843   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:58.635281   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:58.631176   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:58.631711   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:58.633329   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:58.633843   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:58.635281   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
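Every probe in these cycles logs `found id: ""` followed by `0 containers: []`: crictl ps --quiet prints one container ID per line, so empty output means the component's container simply does not exist yet. A small illustrative parser for that output (parseContainerIDs is a hypothetical helper name, not minikube's code) might be:

package main

import (
	"fmt"
	"strings"
)

// parseContainerIDs: `crictl ps --quiet` prints one container ID per line,
// and empty output means no matching containers.
func parseContainerIDs(out string) []string {
	var ids []string
	for _, line := range strings.Split(out, "\n") {
		if id := strings.TrimSpace(line); id != "" {
			ids = append(ids, id)
		}
	}
	return ids
}

func main() {
	ids := parseContainerIDs("") // empty crictl output, as in every cycle above
	fmt.Printf("%d containers: %v\n", len(ids), ids) // prints: 0 containers: []
}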
	I1213 18:45:01.139326   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:01.150701   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:01.150773   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:01.180572   44722 cri.go:89] found id: ""
	I1213 18:45:01.180597   44722 logs.go:282] 0 containers: []
	W1213 18:45:01.180627   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:01.180632   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:01.180723   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:01.210001   44722 cri.go:89] found id: ""
	I1213 18:45:01.210027   44722 logs.go:282] 0 containers: []
	W1213 18:45:01.210035   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:01.210040   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:01.210144   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:01.240388   44722 cri.go:89] found id: ""
	I1213 18:45:01.240411   44722 logs.go:282] 0 containers: []
	W1213 18:45:01.240419   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:01.240425   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:01.240500   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:01.270469   44722 cri.go:89] found id: ""
	I1213 18:45:01.270485   44722 logs.go:282] 0 containers: []
	W1213 18:45:01.270492   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:01.270498   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:01.270560   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:01.298917   44722 cri.go:89] found id: ""
	I1213 18:45:01.298932   44722 logs.go:282] 0 containers: []
	W1213 18:45:01.298950   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:01.298956   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:01.299047   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:01.326174   44722 cri.go:89] found id: ""
	I1213 18:45:01.326188   44722 logs.go:282] 0 containers: []
	W1213 18:45:01.326195   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:01.326200   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:01.326260   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:01.355316   44722 cri.go:89] found id: ""
	I1213 18:45:01.355331   44722 logs.go:282] 0 containers: []
	W1213 18:45:01.355339   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:01.355348   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:01.355360   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:01.431176   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:01.431206   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:01.443676   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:01.443695   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:01.512045   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:01.503556   14499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:01.504288   14499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:01.506017   14499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:01.506375   14499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:01.508015   14499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:01.503556   14499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:01.504288   14499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:01.506017   14499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:01.506375   14499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:01.508015   14499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:01.512056   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:01.512066   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:01.581540   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:01.581560   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:04.113152   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:04.126133   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:04.126190   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:04.157022   44722 cri.go:89] found id: ""
	I1213 18:45:04.157037   44722 logs.go:282] 0 containers: []
	W1213 18:45:04.157044   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:04.157050   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:04.157111   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:04.184060   44722 cri.go:89] found id: ""
	I1213 18:45:04.184073   44722 logs.go:282] 0 containers: []
	W1213 18:45:04.184080   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:04.184085   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:04.184144   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:04.210310   44722 cri.go:89] found id: ""
	I1213 18:45:04.210323   44722 logs.go:282] 0 containers: []
	W1213 18:45:04.210330   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:04.210336   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:04.210398   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:04.236685   44722 cri.go:89] found id: ""
	I1213 18:45:04.236700   44722 logs.go:282] 0 containers: []
	W1213 18:45:04.236707   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:04.236712   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:04.236771   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:04.265948   44722 cri.go:89] found id: ""
	I1213 18:45:04.265961   44722 logs.go:282] 0 containers: []
	W1213 18:45:04.265968   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:04.265973   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:04.266029   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:04.291029   44722 cri.go:89] found id: ""
	I1213 18:45:04.291042   44722 logs.go:282] 0 containers: []
	W1213 18:45:04.291049   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:04.291065   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:04.291122   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:04.316748   44722 cri.go:89] found id: ""
	I1213 18:45:04.316762   44722 logs.go:282] 0 containers: []
	W1213 18:45:04.316768   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:04.316787   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:04.316798   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:04.380978   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:04.380996   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:04.392325   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:04.392342   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:04.459627   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:04.451449   14606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:04.452151   14606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:04.453706   14606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:04.454141   14606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:04.455629   14606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:04.451449   14606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:04.452151   14606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:04.453706   14606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:04.454141   14606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:04.455629   14606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:04.459637   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:04.459648   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:04.527567   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:04.527587   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:07.060097   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:07.070755   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:07.070814   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:07.098777   44722 cri.go:89] found id: ""
	I1213 18:45:07.098790   44722 logs.go:282] 0 containers: []
	W1213 18:45:07.098797   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:07.098802   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:07.098863   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:07.126857   44722 cri.go:89] found id: ""
	I1213 18:45:07.126870   44722 logs.go:282] 0 containers: []
	W1213 18:45:07.126877   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:07.126882   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:07.126938   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:07.154665   44722 cri.go:89] found id: ""
	I1213 18:45:07.154679   44722 logs.go:282] 0 containers: []
	W1213 18:45:07.154686   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:07.154691   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:07.154751   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:07.183998   44722 cri.go:89] found id: ""
	I1213 18:45:07.184011   44722 logs.go:282] 0 containers: []
	W1213 18:45:07.184018   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:07.184023   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:07.184079   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:07.209217   44722 cri.go:89] found id: ""
	I1213 18:45:07.209230   44722 logs.go:282] 0 containers: []
	W1213 18:45:07.209238   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:07.209249   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:07.209309   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:07.238297   44722 cri.go:89] found id: ""
	I1213 18:45:07.238321   44722 logs.go:282] 0 containers: []
	W1213 18:45:07.238328   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:07.238333   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:07.238392   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:07.268115   44722 cri.go:89] found id: ""
	I1213 18:45:07.268130   44722 logs.go:282] 0 containers: []
	W1213 18:45:07.268136   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:07.268144   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:07.268156   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:07.337456   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:07.337475   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:07.365283   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:07.365299   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:07.433864   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:07.433882   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:07.445039   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:07.445055   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:07.509195   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:07.500621   14726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:07.500993   14726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:07.502681   14726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:07.503001   14726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:07.504545   14726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:07.500621   14726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:07.500993   14726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:07.502681   14726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:07.503001   14726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:07.504545   14726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:10.010342   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:10.026847   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:10.026923   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:10.055758   44722 cri.go:89] found id: ""
	I1213 18:45:10.055773   44722 logs.go:282] 0 containers: []
	W1213 18:45:10.055781   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:10.055786   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:10.055847   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:10.084492   44722 cri.go:89] found id: ""
	I1213 18:45:10.084508   44722 logs.go:282] 0 containers: []
	W1213 18:45:10.084515   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:10.084521   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:10.084579   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:10.124733   44722 cri.go:89] found id: ""
	I1213 18:45:10.124748   44722 logs.go:282] 0 containers: []
	W1213 18:45:10.124756   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:10.124760   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:10.124823   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:10.167562   44722 cri.go:89] found id: ""
	I1213 18:45:10.167575   44722 logs.go:282] 0 containers: []
	W1213 18:45:10.167583   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:10.167588   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:10.167647   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:10.196162   44722 cri.go:89] found id: ""
	I1213 18:45:10.196178   44722 logs.go:282] 0 containers: []
	W1213 18:45:10.196185   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:10.196190   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:10.196251   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:10.222349   44722 cri.go:89] found id: ""
	I1213 18:45:10.222362   44722 logs.go:282] 0 containers: []
	W1213 18:45:10.222370   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:10.222375   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:10.222433   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:10.252822   44722 cri.go:89] found id: ""
	I1213 18:45:10.252838   44722 logs.go:282] 0 containers: []
	W1213 18:45:10.252848   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:10.252856   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:10.252867   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:10.318555   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:10.318574   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:10.330833   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:10.330848   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:10.403119   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:10.391784   14819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:10.392505   14819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:10.394095   14819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:10.394656   14819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:10.396739   14819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:10.391784   14819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:10.392505   14819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:10.394095   14819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:10.394656   14819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:10.396739   14819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:10.403129   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:10.403139   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:10.476776   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:10.476796   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:13.006030   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:13.016994   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:13.017078   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:13.047302   44722 cri.go:89] found id: ""
	I1213 18:45:13.047316   44722 logs.go:282] 0 containers: []
	W1213 18:45:13.047322   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:13.047327   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:13.047390   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:13.072990   44722 cri.go:89] found id: ""
	I1213 18:45:13.073014   44722 logs.go:282] 0 containers: []
	W1213 18:45:13.073024   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:13.073029   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:13.073086   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:13.104144   44722 cri.go:89] found id: ""
	I1213 18:45:13.104158   44722 logs.go:282] 0 containers: []
	W1213 18:45:13.104165   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:13.104169   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:13.104233   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:13.133122   44722 cri.go:89] found id: ""
	I1213 18:45:13.133135   44722 logs.go:282] 0 containers: []
	W1213 18:45:13.133141   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:13.133147   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:13.133228   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:13.165373   44722 cri.go:89] found id: ""
	I1213 18:45:13.165399   44722 logs.go:282] 0 containers: []
	W1213 18:45:13.165406   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:13.165411   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:13.165473   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:13.191991   44722 cri.go:89] found id: ""
	I1213 18:45:13.192004   44722 logs.go:282] 0 containers: []
	W1213 18:45:13.192012   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:13.192017   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:13.192082   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:13.217774   44722 cri.go:89] found id: ""
	I1213 18:45:13.217788   44722 logs.go:282] 0 containers: []
	W1213 18:45:13.217795   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:13.217802   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:13.217813   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:13.284517   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:13.275477   14920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:13.276368   14920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:13.278192   14920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:13.278786   14920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:13.280431   14920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:13.275477   14920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:13.276368   14920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:13.278192   14920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:13.278786   14920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:13.280431   14920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:13.284527   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:13.284538   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:13.353730   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:13.353749   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:13.384210   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:13.384225   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:13.452832   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:13.452849   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:15.964206   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:15.976388   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:15.976453   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:16.006122   44722 cri.go:89] found id: ""
	I1213 18:45:16.006136   44722 logs.go:282] 0 containers: []
	W1213 18:45:16.006143   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:16.006149   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:16.006211   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:16.031686   44722 cri.go:89] found id: ""
	I1213 18:45:16.031700   44722 logs.go:282] 0 containers: []
	W1213 18:45:16.031707   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:16.031712   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:16.031768   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:16.057702   44722 cri.go:89] found id: ""
	I1213 18:45:16.057715   44722 logs.go:282] 0 containers: []
	W1213 18:45:16.057722   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:16.057728   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:16.057783   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:16.090888   44722 cri.go:89] found id: ""
	I1213 18:45:16.090913   44722 logs.go:282] 0 containers: []
	W1213 18:45:16.090921   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:16.090927   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:16.090997   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:16.128051   44722 cri.go:89] found id: ""
	I1213 18:45:16.128075   44722 logs.go:282] 0 containers: []
	W1213 18:45:16.128083   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:16.128089   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:16.128160   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:16.157962   44722 cri.go:89] found id: ""
	I1213 18:45:16.157986   44722 logs.go:282] 0 containers: []
	W1213 18:45:16.157993   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:16.157999   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:16.158057   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:16.184049   44722 cri.go:89] found id: ""
	I1213 18:45:16.184063   44722 logs.go:282] 0 containers: []
	W1213 18:45:16.184070   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:16.184077   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:16.184088   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:16.250129   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:16.250149   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:16.261107   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:16.261125   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:16.330408   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:16.321894   15030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:16.322673   15030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:16.324350   15030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:16.324661   15030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:16.326266   15030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:16.321894   15030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:16.322673   15030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:16.324350   15030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:16.324661   15030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:16.326266   15030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:16.330418   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:16.330428   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:16.398576   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:16.398594   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:18.928496   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:18.938797   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:18.938873   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:18.964909   44722 cri.go:89] found id: ""
	I1213 18:45:18.964924   44722 logs.go:282] 0 containers: []
	W1213 18:45:18.964932   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:18.964939   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:18.964999   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:18.991414   44722 cri.go:89] found id: ""
	I1213 18:45:18.991428   44722 logs.go:282] 0 containers: []
	W1213 18:45:18.991446   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:18.991451   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:18.991508   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:19.021961   44722 cri.go:89] found id: ""
	I1213 18:45:19.021976   44722 logs.go:282] 0 containers: []
	W1213 18:45:19.021983   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:19.021988   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:19.022055   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:19.046931   44722 cri.go:89] found id: ""
	I1213 18:45:19.046945   44722 logs.go:282] 0 containers: []
	W1213 18:45:19.046952   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:19.046957   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:19.047013   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:19.072683   44722 cri.go:89] found id: ""
	I1213 18:45:19.072696   44722 logs.go:282] 0 containers: []
	W1213 18:45:19.072703   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:19.072708   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:19.072778   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:19.100627   44722 cri.go:89] found id: ""
	I1213 18:45:19.100643   44722 logs.go:282] 0 containers: []
	W1213 18:45:19.100651   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:19.100656   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:19.100720   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:19.130142   44722 cri.go:89] found id: ""
	I1213 18:45:19.130157   44722 logs.go:282] 0 containers: []
	W1213 18:45:19.130163   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:19.130171   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:19.130182   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:19.197474   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:19.197494   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:19.208889   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:19.208908   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:19.274541   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:19.265647   15139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:19.266238   15139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:19.267928   15139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:19.268736   15139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:19.270556   15139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:19.265647   15139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:19.266238   15139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:19.267928   15139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:19.268736   15139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:19.270556   15139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:19.274551   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:19.274561   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:19.342919   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:19.342938   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:21.872871   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:21.883492   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:21.883550   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:21.910011   44722 cri.go:89] found id: ""
	I1213 18:45:21.910025   44722 logs.go:282] 0 containers: []
	W1213 18:45:21.910032   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:21.910037   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:21.910094   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:21.935440   44722 cri.go:89] found id: ""
	I1213 18:45:21.935454   44722 logs.go:282] 0 containers: []
	W1213 18:45:21.935461   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:21.935476   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:21.935535   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:21.970166   44722 cri.go:89] found id: ""
	I1213 18:45:21.970181   44722 logs.go:282] 0 containers: []
	W1213 18:45:21.970188   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:21.970193   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:21.970254   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:21.996521   44722 cri.go:89] found id: ""
	I1213 18:45:21.996544   44722 logs.go:282] 0 containers: []
	W1213 18:45:21.996552   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:21.996557   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:21.996625   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:22.026015   44722 cri.go:89] found id: ""
	I1213 18:45:22.026030   44722 logs.go:282] 0 containers: []
	W1213 18:45:22.026048   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:22.026054   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:22.026136   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:22.052512   44722 cri.go:89] found id: ""
	I1213 18:45:22.052526   44722 logs.go:282] 0 containers: []
	W1213 18:45:22.052533   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:22.052547   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:22.052634   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:22.087211   44722 cri.go:89] found id: ""
	I1213 18:45:22.087242   44722 logs.go:282] 0 containers: []
	W1213 18:45:22.087249   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:22.087258   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:22.087268   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:22.161238   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:22.161256   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:22.172311   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:22.172327   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:22.235337   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:22.226748   15248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:22.227404   15248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:22.229399   15248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:22.229780   15248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:22.231333   15248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:22.226748   15248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:22.227404   15248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:22.229399   15248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:22.229780   15248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:22.231333   15248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:22.235349   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:22.235360   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:22.304771   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:22.304790   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:24.834025   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:24.844561   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:24.844623   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:24.869497   44722 cri.go:89] found id: ""
	I1213 18:45:24.869512   44722 logs.go:282] 0 containers: []
	W1213 18:45:24.869519   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:24.869524   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:24.869582   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:24.899663   44722 cri.go:89] found id: ""
	I1213 18:45:24.899677   44722 logs.go:282] 0 containers: []
	W1213 18:45:24.899685   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:24.899690   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:24.899750   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:24.929664   44722 cri.go:89] found id: ""
	I1213 18:45:24.929678   44722 logs.go:282] 0 containers: []
	W1213 18:45:24.929685   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:24.929689   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:24.929748   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:24.954943   44722 cri.go:89] found id: ""
	I1213 18:45:24.954957   44722 logs.go:282] 0 containers: []
	W1213 18:45:24.954964   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:24.954969   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:24.955024   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:24.981964   44722 cri.go:89] found id: ""
	I1213 18:45:24.981978   44722 logs.go:282] 0 containers: []
	W1213 18:45:24.981985   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:24.981991   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:24.982048   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:25.024491   44722 cri.go:89] found id: ""
	I1213 18:45:25.024507   44722 logs.go:282] 0 containers: []
	W1213 18:45:25.024514   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:25.024519   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:25.024587   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:25.059717   44722 cri.go:89] found id: ""
	I1213 18:45:25.059732   44722 logs.go:282] 0 containers: []
	W1213 18:45:25.059740   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:25.059747   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:25.059758   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:25.137684   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:25.137709   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:25.152450   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:25.152466   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:25.224073   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:25.215282   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:25.215897   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:25.217852   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:25.218715   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:25.219908   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:25.215282   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:25.215897   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:25.217852   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:25.218715   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:25.219908   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:25.224083   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:25.224095   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:25.293145   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:25.293164   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:27.825368   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:27.835872   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:27.835932   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:27.861658   44722 cri.go:89] found id: ""
	I1213 18:45:27.861672   44722 logs.go:282] 0 containers: []
	W1213 18:45:27.861679   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:27.861684   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:27.861742   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:27.886615   44722 cri.go:89] found id: ""
	I1213 18:45:27.886629   44722 logs.go:282] 0 containers: []
	W1213 18:45:27.886636   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:27.886641   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:27.886697   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:27.915655   44722 cri.go:89] found id: ""
	I1213 18:45:27.915669   44722 logs.go:282] 0 containers: []
	W1213 18:45:27.915676   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:27.915681   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:27.915743   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:27.940463   44722 cri.go:89] found id: ""
	I1213 18:45:27.940477   44722 logs.go:282] 0 containers: []
	W1213 18:45:27.940484   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:27.940489   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:27.940546   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:27.970042   44722 cri.go:89] found id: ""
	I1213 18:45:27.970056   44722 logs.go:282] 0 containers: []
	W1213 18:45:27.970063   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:27.970068   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:27.970125   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:27.996687   44722 cri.go:89] found id: ""
	I1213 18:45:27.996702   44722 logs.go:282] 0 containers: []
	W1213 18:45:27.996708   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:27.996714   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:27.996773   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:28.025848   44722 cri.go:89] found id: ""
	I1213 18:45:28.025861   44722 logs.go:282] 0 containers: []
	W1213 18:45:28.025868   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:28.025876   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:28.025894   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:28.104265   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:28.104292   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:28.116838   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:28.116855   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:28.189318   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:28.180911   15462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:28.181676   15462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:28.183358   15462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:28.184009   15462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:28.185382   15462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:28.180911   15462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:28.181676   15462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:28.183358   15462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:28.184009   15462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:28.185382   15462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:28.189329   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:28.189340   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:28.257409   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:28.257428   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:30.789289   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:30.799688   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:30.799748   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:30.828658   44722 cri.go:89] found id: ""
	I1213 18:45:30.828672   44722 logs.go:282] 0 containers: []
	W1213 18:45:30.828680   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:30.828688   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:30.828748   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:30.854242   44722 cri.go:89] found id: ""
	I1213 18:45:30.854256   44722 logs.go:282] 0 containers: []
	W1213 18:45:30.854263   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:30.854268   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:30.854325   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:30.879211   44722 cri.go:89] found id: ""
	I1213 18:45:30.879225   44722 logs.go:282] 0 containers: []
	W1213 18:45:30.879235   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:30.879241   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:30.879298   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:30.908380   44722 cri.go:89] found id: ""
	I1213 18:45:30.908394   44722 logs.go:282] 0 containers: []
	W1213 18:45:30.908401   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:30.908406   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:30.908462   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:30.934004   44722 cri.go:89] found id: ""
	I1213 18:45:30.934023   44722 logs.go:282] 0 containers: []
	W1213 18:45:30.934030   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:30.934035   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:30.934094   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:30.959088   44722 cri.go:89] found id: ""
	I1213 18:45:30.959101   44722 logs.go:282] 0 containers: []
	W1213 18:45:30.959108   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:30.959113   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:30.959172   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:30.987128   44722 cri.go:89] found id: ""
	I1213 18:45:30.987142   44722 logs.go:282] 0 containers: []
	W1213 18:45:30.987149   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:30.987156   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:30.987167   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:30.999233   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:30.999253   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:31.070686   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:31.062512   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:31.063387   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:31.064956   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:31.065476   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:31.066859   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:31.062512   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:31.063387   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:31.064956   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:31.065476   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:31.066859   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:31.070697   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:31.070708   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:31.149373   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:31.149393   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:31.182467   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:31.182484   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:33.754920   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:33.764984   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:33.765061   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:33.789610   44722 cri.go:89] found id: ""
	I1213 18:45:33.789624   44722 logs.go:282] 0 containers: []
	W1213 18:45:33.789630   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:33.789635   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:33.789694   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:33.814723   44722 cri.go:89] found id: ""
	I1213 18:45:33.814738   44722 logs.go:282] 0 containers: []
	W1213 18:45:33.814744   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:33.814749   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:33.814811   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:33.841835   44722 cri.go:89] found id: ""
	I1213 18:45:33.841848   44722 logs.go:282] 0 containers: []
	W1213 18:45:33.841855   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:33.841860   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:33.841917   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:33.875847   44722 cri.go:89] found id: ""
	I1213 18:45:33.875871   44722 logs.go:282] 0 containers: []
	W1213 18:45:33.875878   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:33.875885   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:33.875953   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:33.903037   44722 cri.go:89] found id: ""
	I1213 18:45:33.903050   44722 logs.go:282] 0 containers: []
	W1213 18:45:33.903057   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:33.903062   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:33.903135   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:33.934423   44722 cri.go:89] found id: ""
	I1213 18:45:33.934437   44722 logs.go:282] 0 containers: []
	W1213 18:45:33.934444   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:33.934449   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:33.934522   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:33.959437   44722 cri.go:89] found id: ""
	I1213 18:45:33.959450   44722 logs.go:282] 0 containers: []
	W1213 18:45:33.959458   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:33.959465   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:33.959475   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:34.024568   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:34.024587   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:34.036558   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:34.036583   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:34.113960   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:34.105595   15665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:34.106445   15665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:34.107646   15665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:34.108191   15665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:34.109855   15665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:34.105595   15665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:34.106445   15665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:34.107646   15665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:34.108191   15665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:34.109855   15665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:34.113970   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:34.113988   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:34.186879   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:34.186900   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:36.717771   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:36.731405   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:36.731462   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:36.758511   44722 cri.go:89] found id: ""
	I1213 18:45:36.758525   44722 logs.go:282] 0 containers: []
	W1213 18:45:36.758532   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:36.758537   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:36.758595   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:36.784601   44722 cri.go:89] found id: ""
	I1213 18:45:36.784614   44722 logs.go:282] 0 containers: []
	W1213 18:45:36.784621   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:36.784626   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:36.784683   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:36.813889   44722 cri.go:89] found id: ""
	I1213 18:45:36.813903   44722 logs.go:282] 0 containers: []
	W1213 18:45:36.813910   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:36.813915   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:36.813974   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:36.840673   44722 cri.go:89] found id: ""
	I1213 18:45:36.840687   44722 logs.go:282] 0 containers: []
	W1213 18:45:36.840695   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:36.840701   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:36.840758   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:36.866658   44722 cri.go:89] found id: ""
	I1213 18:45:36.866673   44722 logs.go:282] 0 containers: []
	W1213 18:45:36.866679   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:36.866684   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:36.866761   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:36.893289   44722 cri.go:89] found id: ""
	I1213 18:45:36.893303   44722 logs.go:282] 0 containers: []
	W1213 18:45:36.893311   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:36.893316   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:36.893377   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:36.920158   44722 cri.go:89] found id: ""
	I1213 18:45:36.920171   44722 logs.go:282] 0 containers: []
	W1213 18:45:36.920178   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:36.920186   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:36.920196   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:36.987002   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:36.987021   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:36.999105   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:36.999128   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:37.072378   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:37.063848   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:37.064510   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:37.066038   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:37.066549   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:37.067999   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:37.063848   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:37.064510   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:37.066038   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:37.066549   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:37.067999   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:37.072390   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:37.072401   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:37.145027   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:37.145047   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:39.682857   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:39.693055   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:39.693114   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:39.717750   44722 cri.go:89] found id: ""
	I1213 18:45:39.717763   44722 logs.go:282] 0 containers: []
	W1213 18:45:39.717771   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:39.717776   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:39.717831   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:39.748452   44722 cri.go:89] found id: ""
	I1213 18:45:39.748466   44722 logs.go:282] 0 containers: []
	W1213 18:45:39.748473   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:39.748478   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:39.748535   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:39.775686   44722 cri.go:89] found id: ""
	I1213 18:45:39.775700   44722 logs.go:282] 0 containers: []
	W1213 18:45:39.775706   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:39.775712   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:39.775773   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:39.801049   44722 cri.go:89] found id: ""
	I1213 18:45:39.801063   44722 logs.go:282] 0 containers: []
	W1213 18:45:39.801070   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:39.801075   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:39.801132   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:39.829545   44722 cri.go:89] found id: ""
	I1213 18:45:39.829559   44722 logs.go:282] 0 containers: []
	W1213 18:45:39.829566   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:39.829571   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:39.829627   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:39.855870   44722 cri.go:89] found id: ""
	I1213 18:45:39.855883   44722 logs.go:282] 0 containers: []
	W1213 18:45:39.855890   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:39.855895   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:39.855951   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:39.880432   44722 cri.go:89] found id: ""
	I1213 18:45:39.880446   44722 logs.go:282] 0 containers: []
	W1213 18:45:39.880452   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:39.880460   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:39.880471   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:39.944602   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:39.936636   15870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:39.937539   15870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:39.939109   15870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:39.939488   15870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:39.940927   15870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:39.936636   15870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:39.937539   15870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:39.939109   15870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:39.939488   15870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:39.940927   15870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:39.944613   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:39.944623   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:40.014162   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:40.014186   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:40.052762   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:40.052780   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:40.123344   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:40.123364   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:42.639745   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:42.650139   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:42.650196   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:42.674810   44722 cri.go:89] found id: ""
	I1213 18:45:42.674824   44722 logs.go:282] 0 containers: []
	W1213 18:45:42.674831   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:42.674836   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:42.674896   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:42.705498   44722 cri.go:89] found id: ""
	I1213 18:45:42.705512   44722 logs.go:282] 0 containers: []
	W1213 18:45:42.705519   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:42.705524   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:42.705590   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:42.731558   44722 cri.go:89] found id: ""
	I1213 18:45:42.731572   44722 logs.go:282] 0 containers: []
	W1213 18:45:42.731586   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:42.731591   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:42.731650   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:42.758070   44722 cri.go:89] found id: ""
	I1213 18:45:42.758084   44722 logs.go:282] 0 containers: []
	W1213 18:45:42.758098   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:42.758103   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:42.758163   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:42.784043   44722 cri.go:89] found id: ""
	I1213 18:45:42.784057   44722 logs.go:282] 0 containers: []
	W1213 18:45:42.784065   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:42.784069   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:42.784130   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:42.810580   44722 cri.go:89] found id: ""
	I1213 18:45:42.810594   44722 logs.go:282] 0 containers: []
	W1213 18:45:42.810602   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:42.810607   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:42.810667   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:42.837217   44722 cri.go:89] found id: ""
	I1213 18:45:42.837230   44722 logs.go:282] 0 containers: []
	W1213 18:45:42.837237   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:42.837244   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:42.837255   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:42.869269   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:42.869289   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:42.937246   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:42.937265   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:42.948535   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:42.948551   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:43.014525   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:43.006257   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:43.006741   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:43.008386   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:43.008729   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:43.010279   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:43.006257   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:43.006741   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:43.008386   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:43.008729   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:43.010279   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:43.014550   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:43.014561   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:45.585650   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:45.596016   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:45.596081   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:45.621732   44722 cri.go:89] found id: ""
	I1213 18:45:45.621746   44722 logs.go:282] 0 containers: []
	W1213 18:45:45.621753   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:45.621758   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:45.621828   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:45.647999   44722 cri.go:89] found id: ""
	I1213 18:45:45.648013   44722 logs.go:282] 0 containers: []
	W1213 18:45:45.648020   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:45.648025   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:45.648084   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:45.672656   44722 cri.go:89] found id: ""
	I1213 18:45:45.672669   44722 logs.go:282] 0 containers: []
	W1213 18:45:45.672676   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:45.672681   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:45.672737   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:45.697633   44722 cri.go:89] found id: ""
	I1213 18:45:45.697648   44722 logs.go:282] 0 containers: []
	W1213 18:45:45.697655   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:45.697660   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:45.697725   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:45.722938   44722 cri.go:89] found id: ""
	I1213 18:45:45.722957   44722 logs.go:282] 0 containers: []
	W1213 18:45:45.722964   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:45.722969   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:45.723027   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:45.753044   44722 cri.go:89] found id: ""
	I1213 18:45:45.753057   44722 logs.go:282] 0 containers: []
	W1213 18:45:45.753064   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:45.753069   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:45.753139   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:45.777945   44722 cri.go:89] found id: ""
	I1213 18:45:45.777959   44722 logs.go:282] 0 containers: []
	W1213 18:45:45.777966   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:45.777974   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:45.777984   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:45.788618   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:45.788634   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:45.856342   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:45.847135   16083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:45.847845   16083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:45.849739   16083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:45.850385   16083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:45.851966   16083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:45.847135   16083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:45.847845   16083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:45.849739   16083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:45.850385   16083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:45.851966   16083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:45.856353   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:45.856363   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:45.925928   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:45.925948   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:45.955270   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:45.955286   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:48.526489   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:48.536804   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:48.536878   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:48.564096   44722 cri.go:89] found id: ""
	I1213 18:45:48.564110   44722 logs.go:282] 0 containers: []
	W1213 18:45:48.564116   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:48.564121   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:48.564180   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:48.589084   44722 cri.go:89] found id: ""
	I1213 18:45:48.589098   44722 logs.go:282] 0 containers: []
	W1213 18:45:48.589105   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:48.589117   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:48.589174   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:48.614957   44722 cri.go:89] found id: ""
	I1213 18:45:48.614971   44722 logs.go:282] 0 containers: []
	W1213 18:45:48.614978   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:48.614989   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:48.615045   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:48.639705   44722 cri.go:89] found id: ""
	I1213 18:45:48.639719   44722 logs.go:282] 0 containers: []
	W1213 18:45:48.639725   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:48.639730   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:48.639789   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:48.665151   44722 cri.go:89] found id: ""
	I1213 18:45:48.665165   44722 logs.go:282] 0 containers: []
	W1213 18:45:48.665171   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:48.665176   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:48.665237   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:48.691765   44722 cri.go:89] found id: ""
	I1213 18:45:48.691779   44722 logs.go:282] 0 containers: []
	W1213 18:45:48.691786   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:48.691791   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:48.691846   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:48.718076   44722 cri.go:89] found id: ""
	I1213 18:45:48.718089   44722 logs.go:282] 0 containers: []
	W1213 18:45:48.718096   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:48.718104   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:48.718115   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:48.729150   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:48.729166   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:48.795759   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:48.787631   16186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:48.788312   16186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:48.790025   16186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:48.790514   16186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:48.791993   16186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:48.787631   16186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:48.788312   16186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:48.790025   16186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:48.790514   16186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:48.791993   16186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:48.795769   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:48.795780   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:48.865101   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:48.865123   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:48.893317   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:48.893332   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:51.461504   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:51.471540   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:51.471603   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:51.496535   44722 cri.go:89] found id: ""
	I1213 18:45:51.496549   44722 logs.go:282] 0 containers: []
	W1213 18:45:51.496556   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:51.496561   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:51.496620   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:51.523516   44722 cri.go:89] found id: ""
	I1213 18:45:51.523530   44722 logs.go:282] 0 containers: []
	W1213 18:45:51.523537   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:51.523542   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:51.523601   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:51.548779   44722 cri.go:89] found id: ""
	I1213 18:45:51.548792   44722 logs.go:282] 0 containers: []
	W1213 18:45:51.548799   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:51.548804   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:51.548862   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:51.574426   44722 cri.go:89] found id: ""
	I1213 18:45:51.574439   44722 logs.go:282] 0 containers: []
	W1213 18:45:51.574446   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:51.574451   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:51.574508   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:51.601095   44722 cri.go:89] found id: ""
	I1213 18:45:51.601116   44722 logs.go:282] 0 containers: []
	W1213 18:45:51.601123   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:51.601128   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:51.601185   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:51.630300   44722 cri.go:89] found id: ""
	I1213 18:45:51.630314   44722 logs.go:282] 0 containers: []
	W1213 18:45:51.630321   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:51.630326   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:51.630388   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:51.658180   44722 cri.go:89] found id: ""
	I1213 18:45:51.658194   44722 logs.go:282] 0 containers: []
	W1213 18:45:51.658200   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:51.658208   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:51.658218   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:51.727599   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:51.727617   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:51.740526   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:51.740543   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:51.824581   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:51.815003   16296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:51.815673   16296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:51.817551   16296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:51.818376   16296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:51.820029   16296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:51.815003   16296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:51.815673   16296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:51.817551   16296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:51.818376   16296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:51.820029   16296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:51.824598   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:51.824608   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:51.895130   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:51.895149   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:54.423725   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:54.434109   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:54.434167   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:54.461075   44722 cri.go:89] found id: ""
	I1213 18:45:54.461096   44722 logs.go:282] 0 containers: []
	W1213 18:45:54.461104   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:54.461109   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:54.461169   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:54.486465   44722 cri.go:89] found id: ""
	I1213 18:45:54.486479   44722 logs.go:282] 0 containers: []
	W1213 18:45:54.486485   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:54.486490   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:54.486545   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:54.512518   44722 cri.go:89] found id: ""
	I1213 18:45:54.512532   44722 logs.go:282] 0 containers: []
	W1213 18:45:54.512539   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:54.512556   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:54.512613   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:54.539809   44722 cri.go:89] found id: ""
	I1213 18:45:54.539823   44722 logs.go:282] 0 containers: []
	W1213 18:45:54.539830   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:54.539835   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:54.539897   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:54.570146   44722 cri.go:89] found id: ""
	I1213 18:45:54.570159   44722 logs.go:282] 0 containers: []
	W1213 18:45:54.570166   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:54.570170   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:54.570224   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:54.596027   44722 cri.go:89] found id: ""
	I1213 18:45:54.596041   44722 logs.go:282] 0 containers: []
	W1213 18:45:54.596047   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:54.596052   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:54.596113   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:54.623337   44722 cri.go:89] found id: ""
	I1213 18:45:54.623351   44722 logs.go:282] 0 containers: []
	W1213 18:45:54.623358   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:54.623367   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:54.623382   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:54.654287   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:54.654305   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:54.720405   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:54.720426   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:54.731640   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:54.731656   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:54.800062   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:54.792084   16414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:54.792588   16414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:54.794071   16414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:54.794411   16414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:54.795882   16414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:54.792084   16414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:54.792588   16414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:54.794071   16414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:54.794411   16414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:54.795882   16414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:54.800085   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:54.800095   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:57.370530   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:57.381975   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:57.382044   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:57.410748   44722 cri.go:89] found id: ""
	I1213 18:45:57.410761   44722 logs.go:282] 0 containers: []
	W1213 18:45:57.410768   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:57.410773   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:57.410834   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:57.437110   44722 cri.go:89] found id: ""
	I1213 18:45:57.437123   44722 logs.go:282] 0 containers: []
	W1213 18:45:57.437130   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:57.437135   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:57.437196   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:57.463356   44722 cri.go:89] found id: ""
	I1213 18:45:57.463370   44722 logs.go:282] 0 containers: []
	W1213 18:45:57.463377   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:57.463381   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:57.463436   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:57.488350   44722 cri.go:89] found id: ""
	I1213 18:45:57.488364   44722 logs.go:282] 0 containers: []
	W1213 18:45:57.488381   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:57.488387   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:57.488442   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:57.513926   44722 cri.go:89] found id: ""
	I1213 18:45:57.513939   44722 logs.go:282] 0 containers: []
	W1213 18:45:57.513951   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:57.513956   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:57.514013   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:57.539641   44722 cri.go:89] found id: ""
	I1213 18:45:57.539655   44722 logs.go:282] 0 containers: []
	W1213 18:45:57.539661   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:57.539666   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:57.539722   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:57.565672   44722 cri.go:89] found id: ""
	I1213 18:45:57.565686   44722 logs.go:282] 0 containers: []
	W1213 18:45:57.565693   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:57.565700   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:57.565710   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:57.637461   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:57.637486   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:57.648402   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:57.648418   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:57.716551   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:57.708424   16510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:57.708971   16510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:57.710676   16510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:57.711086   16510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:57.712583   16510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:57.708424   16510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:57.708971   16510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:57.710676   16510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:57.711086   16510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:57.712583   16510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:57.716567   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:57.716579   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:57.785661   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:57.785681   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:46:00.318382   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:46:00.335223   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:46:00.335290   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:46:00.415052   44722 cri.go:89] found id: ""
	I1213 18:46:00.415068   44722 logs.go:282] 0 containers: []
	W1213 18:46:00.415075   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:46:00.415080   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:46:00.415144   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:46:00.448025   44722 cri.go:89] found id: ""
	I1213 18:46:00.448039   44722 logs.go:282] 0 containers: []
	W1213 18:46:00.448047   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:46:00.448052   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:46:00.448120   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:46:00.478830   44722 cri.go:89] found id: ""
	I1213 18:46:00.478844   44722 logs.go:282] 0 containers: []
	W1213 18:46:00.478851   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:46:00.478856   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:46:00.478915   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:46:00.510923   44722 cri.go:89] found id: ""
	I1213 18:46:00.510943   44722 logs.go:282] 0 containers: []
	W1213 18:46:00.510951   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:46:00.510956   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:46:00.511018   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:46:00.538053   44722 cri.go:89] found id: ""
	I1213 18:46:00.538068   44722 logs.go:282] 0 containers: []
	W1213 18:46:00.538075   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:46:00.538080   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:46:00.538139   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:46:00.563080   44722 cri.go:89] found id: ""
	I1213 18:46:00.563094   44722 logs.go:282] 0 containers: []
	W1213 18:46:00.563101   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:46:00.563107   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:46:00.563162   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:46:00.588696   44722 cri.go:89] found id: ""
	I1213 18:46:00.588710   44722 logs.go:282] 0 containers: []
	W1213 18:46:00.588716   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:46:00.588724   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:46:00.588734   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:46:00.655165   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:46:00.655185   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:46:00.667201   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:46:00.667217   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:46:00.732035   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:46:00.723385   16613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:00.723987   16613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:00.725839   16613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:00.726393   16613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:00.728162   16613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:46:00.723385   16613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:00.723987   16613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:00.725839   16613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:00.726393   16613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:00.728162   16613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:46:00.732045   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:46:00.732055   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:46:00.803574   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:46:00.803592   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:46:03.335736   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:46:03.347198   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:46:03.347266   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:46:03.376587   44722 cri.go:89] found id: ""
	I1213 18:46:03.376600   44722 logs.go:282] 0 containers: []
	W1213 18:46:03.376625   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:46:03.376630   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:46:03.376698   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:46:03.407284   44722 cri.go:89] found id: ""
	I1213 18:46:03.407298   44722 logs.go:282] 0 containers: []
	W1213 18:46:03.407305   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:46:03.407310   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:46:03.407379   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:46:03.432194   44722 cri.go:89] found id: ""
	I1213 18:46:03.432219   44722 logs.go:282] 0 containers: []
	W1213 18:46:03.432226   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:46:03.432231   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:46:03.432297   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:46:03.461490   44722 cri.go:89] found id: ""
	I1213 18:46:03.461504   44722 logs.go:282] 0 containers: []
	W1213 18:46:03.461520   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:46:03.461528   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:46:03.461586   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:46:03.486500   44722 cri.go:89] found id: ""
	I1213 18:46:03.486514   44722 logs.go:282] 0 containers: []
	W1213 18:46:03.486521   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:46:03.486526   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:46:03.486580   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:46:03.516064   44722 cri.go:89] found id: ""
	I1213 18:46:03.516079   44722 logs.go:282] 0 containers: []
	W1213 18:46:03.516095   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:46:03.516101   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:46:03.516173   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:46:03.543241   44722 cri.go:89] found id: ""
	I1213 18:46:03.543261   44722 logs.go:282] 0 containers: []
	W1213 18:46:03.543269   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:46:03.543277   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:46:03.543288   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:46:03.614698   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:46:03.606014   16715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:03.606848   16715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:03.608572   16715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:03.609328   16715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:03.610814   16715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:46:03.606014   16715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:03.606848   16715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:03.608572   16715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:03.609328   16715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:03.610814   16715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:46:03.614708   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:46:03.614719   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:46:03.683610   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:46:03.683629   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:46:03.714101   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:46:03.714118   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:46:03.783821   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:46:03.783841   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:46:06.296661   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:46:06.307402   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:46:06.307473   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:46:06.342139   44722 cri.go:89] found id: ""
	I1213 18:46:06.342152   44722 logs.go:282] 0 containers: []
	W1213 18:46:06.342159   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:46:06.342164   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:46:06.342223   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:46:06.376710   44722 cri.go:89] found id: ""
	I1213 18:46:06.376724   44722 logs.go:282] 0 containers: []
	W1213 18:46:06.376730   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:46:06.376735   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:46:06.376793   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:46:06.412732   44722 cri.go:89] found id: ""
	I1213 18:46:06.412746   44722 logs.go:282] 0 containers: []
	W1213 18:46:06.412753   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:46:06.412758   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:46:06.412814   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:46:06.445341   44722 cri.go:89] found id: ""
	I1213 18:46:06.445354   44722 logs.go:282] 0 containers: []
	W1213 18:46:06.445360   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:46:06.445365   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:46:06.445423   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:46:06.470587   44722 cri.go:89] found id: ""
	I1213 18:46:06.470601   44722 logs.go:282] 0 containers: []
	W1213 18:46:06.470608   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:46:06.470613   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:46:06.470667   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:46:06.495331   44722 cri.go:89] found id: ""
	I1213 18:46:06.495347   44722 logs.go:282] 0 containers: []
	W1213 18:46:06.495354   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:46:06.495360   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:46:06.495420   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:46:06.521489   44722 cri.go:89] found id: ""
	I1213 18:46:06.521503   44722 logs.go:282] 0 containers: []
	W1213 18:46:06.521510   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:46:06.521517   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:46:06.521531   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:46:06.552192   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:46:06.552209   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:46:06.618284   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:46:06.618302   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:46:06.630541   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:46:06.630558   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:46:06.702858   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:46:06.695039   16839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:06.695585   16839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:06.697148   16839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:06.697474   16839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:06.698996   16839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:46:06.695039   16839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:06.695585   16839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:06.697148   16839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:06.697474   16839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:06.698996   16839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:46:06.702868   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:46:06.702881   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:46:09.275499   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:46:09.285598   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:46:09.285657   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:46:09.313861   44722 cri.go:89] found id: ""
	I1213 18:46:09.313885   44722 logs.go:282] 0 containers: []
	W1213 18:46:09.313893   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:46:09.313898   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:46:09.313956   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:46:09.346645   44722 cri.go:89] found id: ""
	I1213 18:46:09.346661   44722 logs.go:282] 0 containers: []
	W1213 18:46:09.346671   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:46:09.346677   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:46:09.346742   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:46:09.381723   44722 cri.go:89] found id: ""
	I1213 18:46:09.381743   44722 logs.go:282] 0 containers: []
	W1213 18:46:09.381750   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:46:09.381755   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:46:09.381842   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:46:09.415093   44722 cri.go:89] found id: ""
	I1213 18:46:09.415106   44722 logs.go:282] 0 containers: []
	W1213 18:46:09.415113   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:46:09.415118   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:46:09.415178   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:46:09.440412   44722 cri.go:89] found id: ""
	I1213 18:46:09.440426   44722 logs.go:282] 0 containers: []
	W1213 18:46:09.440433   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:46:09.440438   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:46:09.440495   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:46:09.469945   44722 cri.go:89] found id: ""
	I1213 18:46:09.469959   44722 logs.go:282] 0 containers: []
	W1213 18:46:09.469965   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:46:09.469971   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:46:09.470037   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:46:09.495452   44722 cri.go:89] found id: ""
	I1213 18:46:09.495478   44722 logs.go:282] 0 containers: []
	W1213 18:46:09.495486   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:46:09.495494   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:46:09.495505   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:46:09.507701   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:46:09.507716   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:46:09.577735   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:46:09.564499   16927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:09.564927   16927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:09.571154   16927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:09.571832   16927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:09.573056   16927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:46:09.564499   16927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:09.564927   16927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:09.571154   16927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:09.571832   16927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:09.573056   16927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:46:09.577745   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:46:09.577756   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:46:09.650543   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:46:09.650564   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:46:09.680040   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:46:09.680057   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:46:12.249315   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:46:12.259200   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:46:12.259257   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:46:12.284607   44722 cri.go:89] found id: ""
	I1213 18:46:12.284620   44722 logs.go:282] 0 containers: []
	W1213 18:46:12.284627   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:46:12.284632   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:46:12.284697   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:46:12.318167   44722 cri.go:89] found id: ""
	I1213 18:46:12.318180   44722 logs.go:282] 0 containers: []
	W1213 18:46:12.318187   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:46:12.318191   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:46:12.318249   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:46:12.361187   44722 cri.go:89] found id: ""
	I1213 18:46:12.361201   44722 logs.go:282] 0 containers: []
	W1213 18:46:12.361208   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:46:12.361213   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:46:12.361270   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:46:12.396970   44722 cri.go:89] found id: ""
	I1213 18:46:12.396983   44722 logs.go:282] 0 containers: []
	W1213 18:46:12.396990   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:46:12.396995   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:46:12.397098   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:46:12.423202   44722 cri.go:89] found id: ""
	I1213 18:46:12.423215   44722 logs.go:282] 0 containers: []
	W1213 18:46:12.423222   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:46:12.423227   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:46:12.423286   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:46:12.448231   44722 cri.go:89] found id: ""
	I1213 18:46:12.448245   44722 logs.go:282] 0 containers: []
	W1213 18:46:12.448252   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:46:12.448257   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:46:12.448314   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:46:12.477927   44722 cri.go:89] found id: ""
	I1213 18:46:12.477941   44722 logs.go:282] 0 containers: []
	W1213 18:46:12.477949   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:46:12.477956   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:46:12.477966   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:46:12.547816   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:46:12.547834   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:46:12.559262   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:46:12.559280   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:46:12.622773   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:46:12.614428   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:12.615068   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:12.616576   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:12.617216   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:12.618857   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:46:12.614428   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:12.615068   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:12.616576   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:12.617216   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:12.618857   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:46:12.622783   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:46:12.622793   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:46:12.692295   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:46:12.692312   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:46:15.224550   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:46:15.235025   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:46:15.235085   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:46:15.261669   44722 cri.go:89] found id: ""
	I1213 18:46:15.261683   44722 logs.go:282] 0 containers: []
	W1213 18:46:15.261690   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:46:15.261695   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:46:15.261755   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:46:15.290899   44722 cri.go:89] found id: ""
	I1213 18:46:15.290913   44722 logs.go:282] 0 containers: []
	W1213 18:46:15.290920   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:46:15.290925   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:46:15.290979   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:46:15.317538   44722 cri.go:89] found id: ""
	I1213 18:46:15.317551   44722 logs.go:282] 0 containers: []
	W1213 18:46:15.317558   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:46:15.317563   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:46:15.317621   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:46:15.359563   44722 cri.go:89] found id: ""
	I1213 18:46:15.359577   44722 logs.go:282] 0 containers: []
	W1213 18:46:15.359584   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:46:15.359589   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:46:15.359645   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:46:15.395203   44722 cri.go:89] found id: ""
	I1213 18:46:15.395216   44722 logs.go:282] 0 containers: []
	W1213 18:46:15.395223   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:46:15.395228   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:46:15.395288   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:46:15.428291   44722 cri.go:89] found id: ""
	I1213 18:46:15.428304   44722 logs.go:282] 0 containers: []
	W1213 18:46:15.428311   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:46:15.428316   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:46:15.428372   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:46:15.453931   44722 cri.go:89] found id: ""
	I1213 18:46:15.453945   44722 logs.go:282] 0 containers: []
	W1213 18:46:15.453951   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:46:15.453958   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:46:15.453969   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:46:15.521521   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:46:15.512931   17130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:15.513463   17130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:15.515174   17130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:15.515484   17130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:15.517840   17130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:46:15.512931   17130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:15.513463   17130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:15.515174   17130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:15.515484   17130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:15.517840   17130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:46:15.521531   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:46:15.521541   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:46:15.591139   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:46:15.591160   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:46:15.622465   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:46:15.622481   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:46:15.691330   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:46:15.691348   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:46:18.203416   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:46:18.213952   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:46:18.214025   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:46:18.239778   44722 cri.go:89] found id: ""
	I1213 18:46:18.239792   44722 logs.go:282] 0 containers: []
	W1213 18:46:18.239808   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:46:18.239814   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:46:18.239879   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:46:18.264101   44722 cri.go:89] found id: ""
	I1213 18:46:18.264114   44722 logs.go:282] 0 containers: []
	W1213 18:46:18.264121   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:46:18.264126   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:46:18.264185   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:46:18.289302   44722 cri.go:89] found id: ""
	I1213 18:46:18.289316   44722 logs.go:282] 0 containers: []
	W1213 18:46:18.289323   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:46:18.289328   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:46:18.289386   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:46:18.316088   44722 cri.go:89] found id: ""
	I1213 18:46:18.316101   44722 logs.go:282] 0 containers: []
	W1213 18:46:18.316108   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:46:18.316116   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:46:18.316174   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:46:18.351768   44722 cri.go:89] found id: ""
	I1213 18:46:18.351781   44722 logs.go:282] 0 containers: []
	W1213 18:46:18.351788   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:46:18.351792   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:46:18.351846   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:46:18.382427   44722 cri.go:89] found id: ""
	I1213 18:46:18.382441   44722 logs.go:282] 0 containers: []
	W1213 18:46:18.382447   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:46:18.382452   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:46:18.382509   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:46:18.410191   44722 cri.go:89] found id: ""
	I1213 18:46:18.410205   44722 logs.go:282] 0 containers: []
	W1213 18:46:18.410212   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:46:18.410220   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:46:18.410230   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:46:18.473809   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:46:18.464747   17232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:18.465711   17232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:18.467472   17232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:18.467819   17232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:18.469591   17232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:46:18.464747   17232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:18.465711   17232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:18.467472   17232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:18.467819   17232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:18.469591   17232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:46:18.473819   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:46:18.473837   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:46:18.545360   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:46:18.545378   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:46:18.573170   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:46:18.573186   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:46:18.638179   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:46:18.638198   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:46:21.149461   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:46:21.159925   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:46:21.159987   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:46:21.185083   44722 cri.go:89] found id: ""
	I1213 18:46:21.185097   44722 logs.go:282] 0 containers: []
	W1213 18:46:21.185104   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:46:21.185109   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:46:21.185169   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:46:21.210110   44722 cri.go:89] found id: ""
	I1213 18:46:21.210124   44722 logs.go:282] 0 containers: []
	W1213 18:46:21.210131   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:46:21.210136   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:46:21.210199   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:46:21.235437   44722 cri.go:89] found id: ""
	I1213 18:46:21.235450   44722 logs.go:282] 0 containers: []
	W1213 18:46:21.235457   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:46:21.235462   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:46:21.235518   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:46:21.264027   44722 cri.go:89] found id: ""
	I1213 18:46:21.264041   44722 logs.go:282] 0 containers: []
	W1213 18:46:21.264061   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:46:21.264067   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:46:21.264134   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:46:21.291534   44722 cri.go:89] found id: ""
	I1213 18:46:21.291548   44722 logs.go:282] 0 containers: []
	W1213 18:46:21.291567   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:46:21.291571   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:46:21.291638   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:46:21.321987   44722 cri.go:89] found id: ""
	I1213 18:46:21.322010   44722 logs.go:282] 0 containers: []
	W1213 18:46:21.322018   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:46:21.322023   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:46:21.322088   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:46:21.354190   44722 cri.go:89] found id: ""
	I1213 18:46:21.354218   44722 logs.go:282] 0 containers: []
	W1213 18:46:21.354225   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:46:21.354232   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:46:21.354242   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:46:21.432072   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:46:21.432092   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:46:21.443924   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:46:21.443941   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:46:21.512256   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:46:21.503676   17342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:21.504240   17342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:21.506119   17342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:21.506493   17342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:21.508024   17342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:46:21.503676   17342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:21.504240   17342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:21.506119   17342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:21.506493   17342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:21.508024   17342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:46:21.512269   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:46:21.512281   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:46:21.584867   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:46:21.584887   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:46:24.118323   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:46:24.129552   44722 kubeadm.go:602] duration metric: took 4m2.563511626s to restartPrimaryControlPlane
	W1213 18:46:24.129614   44722 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1213 18:46:24.129691   44722 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 18:46:24.541036   44722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 18:46:24.553708   44722 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 18:46:24.561742   44722 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 18:46:24.561810   44722 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 18:46:24.569735   44722 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 18:46:24.569745   44722 kubeadm.go:158] found existing configuration files:
	
	I1213 18:46:24.569794   44722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 18:46:24.577570   44722 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 18:46:24.577624   44722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 18:46:24.584990   44722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 18:46:24.592683   44722 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 18:46:24.592744   44722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 18:46:24.600210   44722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 18:46:24.607772   44722 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 18:46:24.607829   44722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 18:46:24.615311   44722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 18:46:24.623206   44722 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 18:46:24.623270   44722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 18:46:24.631351   44722 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 18:46:24.746076   44722 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 18:46:24.746546   44722 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 18:46:24.812383   44722 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 18:50:26.971755   44722 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 18:50:26.971788   44722 kubeadm.go:319] 
	I1213 18:50:26.971891   44722 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 18:50:26.975722   44722 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 18:50:26.975775   44722 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 18:50:26.975864   44722 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 18:50:26.975918   44722 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 18:50:26.975952   44722 kubeadm.go:319] OS: Linux
	I1213 18:50:26.975995   44722 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 18:50:26.976042   44722 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 18:50:26.976088   44722 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 18:50:26.976134   44722 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 18:50:26.976181   44722 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 18:50:26.976228   44722 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 18:50:26.976271   44722 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 18:50:26.976318   44722 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 18:50:26.976374   44722 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 18:50:26.976446   44722 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 18:50:26.976550   44722 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 18:50:26.976642   44722 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 18:50:26.976705   44722 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 18:50:26.979839   44722 out.go:252]   - Generating certificates and keys ...
	I1213 18:50:26.979929   44722 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 18:50:26.979994   44722 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 18:50:26.980071   44722 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 18:50:26.980130   44722 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 18:50:26.980204   44722 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 18:50:26.980256   44722 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 18:50:26.980323   44722 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 18:50:26.980389   44722 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 18:50:26.980463   44722 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 18:50:26.980534   44722 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 18:50:26.980570   44722 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 18:50:26.980625   44722 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 18:50:26.980698   44722 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 18:50:26.980766   44722 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 18:50:26.980827   44722 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 18:50:26.980893   44722 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 18:50:26.980947   44722 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 18:50:26.981062   44722 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 18:50:26.981134   44722 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 18:50:26.984046   44722 out.go:252]   - Booting up control plane ...
	I1213 18:50:26.984213   44722 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 18:50:26.984302   44722 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 18:50:26.984406   44722 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 18:50:26.984526   44722 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 18:50:26.984621   44722 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 18:50:26.984728   44722 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 18:50:26.984811   44722 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 18:50:26.984849   44722 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 18:50:26.984978   44722 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 18:50:26.985109   44722 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 18:50:26.985193   44722 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000261471s
	I1213 18:50:26.985199   44722 kubeadm.go:319] 
	I1213 18:50:26.985265   44722 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 18:50:26.985304   44722 kubeadm.go:319] 	- The kubelet is not running
	I1213 18:50:26.985407   44722 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 18:50:26.985410   44722 kubeadm.go:319] 
	I1213 18:50:26.985524   44722 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 18:50:26.985559   44722 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 18:50:26.985594   44722 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 18:50:26.985645   44722 kubeadm.go:319] 
	W1213 18:50:26.985723   44722 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000261471s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1213 18:50:26.989121   44722 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 18:50:27.401657   44722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 18:50:27.414174   44722 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 18:50:27.414227   44722 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 18:50:27.422069   44722 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 18:50:27.422079   44722 kubeadm.go:158] found existing configuration files:
	
	I1213 18:50:27.422131   44722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 18:50:27.429688   44722 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 18:50:27.429740   44722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 18:50:27.436848   44722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 18:50:27.444475   44722 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 18:50:27.444539   44722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 18:50:27.451626   44722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 18:50:27.458858   44722 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 18:50:27.458912   44722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 18:50:27.466216   44722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 18:50:27.473793   44722 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 18:50:27.473846   44722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 18:50:27.481268   44722 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 18:50:27.532748   44722 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 18:50:27.532805   44722 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 18:50:27.602576   44722 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 18:50:27.602639   44722 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 18:50:27.602674   44722 kubeadm.go:319] OS: Linux
	I1213 18:50:27.602718   44722 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 18:50:27.602765   44722 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 18:50:27.602811   44722 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 18:50:27.602858   44722 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 18:50:27.602905   44722 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 18:50:27.602952   44722 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 18:50:27.602996   44722 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 18:50:27.603043   44722 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 18:50:27.603088   44722 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 18:50:27.670270   44722 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 18:50:27.670407   44722 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 18:50:27.670497   44722 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 18:50:27.681577   44722 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 18:50:27.686860   44722 out.go:252]   - Generating certificates and keys ...
	I1213 18:50:27.686961   44722 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 18:50:27.687031   44722 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 18:50:27.687115   44722 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 18:50:27.687184   44722 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 18:50:27.687264   44722 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 18:50:27.687325   44722 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 18:50:27.687398   44722 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 18:50:27.687471   44722 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 18:50:27.687593   44722 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 18:50:27.687675   44722 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 18:50:27.687715   44722 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 18:50:27.687778   44722 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 18:50:28.283128   44722 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 18:50:28.400218   44722 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 18:50:28.813695   44722 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 18:50:29.036602   44722 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 18:50:29.078002   44722 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 18:50:29.078680   44722 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 18:50:29.081273   44722 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 18:50:29.084492   44722 out.go:252]   - Booting up control plane ...
	I1213 18:50:29.084588   44722 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 18:50:29.084675   44722 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 18:50:29.086298   44722 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 18:50:29.101051   44722 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 18:50:29.101487   44722 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 18:50:29.109109   44722 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 18:50:29.109586   44722 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 18:50:29.109636   44722 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 18:50:29.237458   44722 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 18:50:29.237571   44722 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 18:54:29.237512   44722 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000245862s
	I1213 18:54:29.237544   44722 kubeadm.go:319] 
	I1213 18:54:29.237597   44722 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 18:54:29.237627   44722 kubeadm.go:319] 	- The kubelet is not running
	I1213 18:54:29.237724   44722 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 18:54:29.237728   44722 kubeadm.go:319] 
	I1213 18:54:29.237836   44722 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 18:54:29.237865   44722 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 18:54:29.237893   44722 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 18:54:29.237896   44722 kubeadm.go:319] 
	I1213 18:54:29.241945   44722 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 18:54:29.242401   44722 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 18:54:29.242519   44722 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 18:54:29.242782   44722 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 18:54:29.242790   44722 kubeadm.go:319] 
	I1213 18:54:29.242854   44722 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 18:54:29.242916   44722 kubeadm.go:403] duration metric: took 12m7.716453663s to StartCluster
	I1213 18:54:29.242947   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:54:29.243009   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:54:29.267936   44722 cri.go:89] found id: ""
	I1213 18:54:29.267953   44722 logs.go:282] 0 containers: []
	W1213 18:54:29.267960   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:54:29.267966   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:54:29.268023   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:54:29.295961   44722 cri.go:89] found id: ""
	I1213 18:54:29.295975   44722 logs.go:282] 0 containers: []
	W1213 18:54:29.295982   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:54:29.295987   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:54:29.296049   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:54:29.321287   44722 cri.go:89] found id: ""
	I1213 18:54:29.321301   44722 logs.go:282] 0 containers: []
	W1213 18:54:29.321308   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:54:29.321313   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:54:29.321369   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:54:29.346752   44722 cri.go:89] found id: ""
	I1213 18:54:29.346766   44722 logs.go:282] 0 containers: []
	W1213 18:54:29.346773   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:54:29.346778   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:54:29.346840   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:54:29.373200   44722 cri.go:89] found id: ""
	I1213 18:54:29.373214   44722 logs.go:282] 0 containers: []
	W1213 18:54:29.373222   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:54:29.373227   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:54:29.373284   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:54:29.399377   44722 cri.go:89] found id: ""
	I1213 18:54:29.399390   44722 logs.go:282] 0 containers: []
	W1213 18:54:29.399397   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:54:29.399403   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:54:29.399459   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:54:29.427837   44722 cri.go:89] found id: ""
	I1213 18:54:29.427851   44722 logs.go:282] 0 containers: []
	W1213 18:54:29.427867   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:54:29.427876   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:54:29.427886   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:54:29.456109   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:54:29.456125   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:54:29.522138   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:54:29.522156   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:54:29.533671   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:54:29.533686   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:54:29.610367   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:54:29.601277   21159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:54:29.601976   21159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:54:29.603577   21159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:54:29.604094   21159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:54:29.605709   21159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:54:29.601277   21159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:54:29.601976   21159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:54:29.603577   21159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:54:29.604094   21159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:54:29.605709   21159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:54:29.610381   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:54:29.610392   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1213 18:54:29.688966   44722 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000245862s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 18:54:29.689015   44722 out.go:285] * 
	W1213 18:54:29.689125   44722 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000245862s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 18:54:29.689180   44722 out.go:285] * 
	W1213 18:54:29.691288   44722 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 18:54:29.696180   44722 out.go:203] 
	W1213 18:54:29.699069   44722 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000245862s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 18:54:29.699113   44722 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 18:54:29.699131   44722 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 18:54:29.702236   44722 out.go:203] 
	
	
	==> CRI-O <==
	Dec 13 18:42:19 functional-752103 crio[9949]: time="2025-12-13T18:42:19.862047918Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 13 18:42:19 functional-752103 crio[9949]: time="2025-12-13T18:42:19.862080558Z" level=info msg="Starting seccomp notifier watcher"
	Dec 13 18:42:19 functional-752103 crio[9949]: time="2025-12-13T18:42:19.862126375Z" level=info msg="Create NRI interface"
	Dec 13 18:42:19 functional-752103 crio[9949]: time="2025-12-13T18:42:19.862224993Z" level=info msg="built-in NRI default validator is disabled"
	Dec 13 18:42:19 functional-752103 crio[9949]: time="2025-12-13T18:42:19.86223278Z" level=info msg="runtime interface created"
	Dec 13 18:42:19 functional-752103 crio[9949]: time="2025-12-13T18:42:19.862244013Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 13 18:42:19 functional-752103 crio[9949]: time="2025-12-13T18:42:19.862251471Z" level=info msg="runtime interface starting up..."
	Dec 13 18:42:19 functional-752103 crio[9949]: time="2025-12-13T18:42:19.862256895Z" level=info msg="starting plugins..."
	Dec 13 18:42:19 functional-752103 crio[9949]: time="2025-12-13T18:42:19.862268768Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 18:42:19 functional-752103 crio[9949]: time="2025-12-13T18:42:19.862331636Z" level=info msg="No systemd watchdog enabled"
	Dec 13 18:42:19 functional-752103 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 13 18:46:24 functional-752103 crio[9949]: time="2025-12-13T18:46:24.818362642Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=dc50dc13-71bf-495d-a717-281bc180f2f6 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:46:24 functional-752103 crio[9949]: time="2025-12-13T18:46:24.819294668Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=d8721ade-dce9-4153-a322-5ccd7819b97b name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:46:24 functional-752103 crio[9949]: time="2025-12-13T18:46:24.81975854Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=490f044a-8303-4886-ba98-7360ebf1ca73 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:46:24 functional-752103 crio[9949]: time="2025-12-13T18:46:24.820179047Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=12624e30-2525-4636-9934-824ea63a04cd name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:46:24 functional-752103 crio[9949]: time="2025-12-13T18:46:24.82056529Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=1e7d135f-0cd8-4d54-96f0-f28f4e7904d3 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:46:24 functional-752103 crio[9949]: time="2025-12-13T18:46:24.820930436Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=30771d6d-e5fc-49d6-aff6-138912d2988b name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:46:24 functional-752103 crio[9949]: time="2025-12-13T18:46:24.821514235Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=c45e1d7a-3ddb-41a5-9415-d5a2464cfd2b name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:50:27 functional-752103 crio[9949]: time="2025-12-13T18:50:27.674061922Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=54566989-a940-4ea0-9cb7-11a5ead5fdab name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:50:27 functional-752103 crio[9949]: time="2025-12-13T18:50:27.67476674Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=9907b75f-aebf-4fc7-948f-3e37eff08342 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:50:27 functional-752103 crio[9949]: time="2025-12-13T18:50:27.675335917Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=a5823f6b-c128-468c-ad19-87c38dcb3493 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:50:27 functional-752103 crio[9949]: time="2025-12-13T18:50:27.675801504Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=eb5c5b0d-734a-42c7-beea-2ae04458cd2c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:50:27 functional-752103 crio[9949]: time="2025-12-13T18:50:27.676236125Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=dc8b8dc3-cec8-44a2-afbb-932c674af235 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:50:27 functional-752103 crio[9949]: time="2025-12-13T18:50:27.676718434Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=fae4abe6-592a-492b-809b-edd01682c93f name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:50:27 functional-752103 crio[9949]: time="2025-12-13T18:50:27.677348338Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=21883f8b-9b90-4bb8-9843-c91d88abb931 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:54:33.286497   21411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:54:33.287154   21411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:54:33.288832   21411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:54:33.289422   21411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:54:33.291007   21411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 17:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014739] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.517365] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033368] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.774100] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.795951] kauditd_printk_skb: 39 callbacks suppressed
	[Dec13 18:17] overlayfs: idmapped layers are currently not supported
	[  +0.067652] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec13 18:23] overlayfs: idmapped layers are currently not supported
	[Dec13 18:24] overlayfs: idmapped layers are currently not supported
	[Dec13 18:42] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 18:54:33 up  1:37,  0 user,  load average: 0.22, 0.22, 0.30
	Linux functional-752103 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 18:54:31 functional-752103 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:54:31 functional-752103 kubelet[21278]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:54:31 functional-752103 kubelet[21278]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:54:31 functional-752103 kubelet[21278]: E1213 18:54:31.166054   21278 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 18:54:31 functional-752103 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 18:54:31 functional-752103 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 18:54:31 functional-752103 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 963.
	Dec 13 18:54:31 functional-752103 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:54:31 functional-752103 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:54:31 functional-752103 kubelet[21289]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:54:31 functional-752103 kubelet[21289]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:54:31 functional-752103 kubelet[21289]: E1213 18:54:31.898847   21289 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 18:54:31 functional-752103 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 18:54:31 functional-752103 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 18:54:32 functional-752103 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 964.
	Dec 13 18:54:32 functional-752103 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:54:32 functional-752103 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:54:32 functional-752103 kubelet[21325]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:54:32 functional-752103 kubelet[21325]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:54:32 functional-752103 kubelet[21325]: E1213 18:54:32.636021   21325 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 18:54:32 functional-752103 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 18:54:32 functional-752103 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 18:54:33 functional-752103 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 965.
	Dec 13 18:54:33 functional-752103 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:54:33 functional-752103 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-752103 -n functional-752103
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-752103 -n functional-752103: exit status 2 (408.915483ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-752103" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (2.16s)
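
The kubelet journal above shows the same validation error on every restart (the restart counter has reached 965): "kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release". A minimal diagnostic sketch for confirming the cgroup mode, assuming shell access to the CI host and to the still-running functional-752103 container; these are standard Linux/Docker/minikube commands, not part of the test suite:

	# On the CI host: "cgroup2fs" means cgroup v2, "tmpfs" means the legacy cgroup v1 hierarchy
	stat -fc %T /sys/fs/cgroup/

	# What Docker reports for the cgroup version it runs containers under
	docker info --format '{{.CgroupVersion}}'

	# Same probe inside the minikube node container
	out/minikube-linux-arm64 -p functional-752103 ssh -- stat -fc %T /sys/fs/cgroup/

If these report cgroup v1, as the kubelet error suggests for this Ubuntu 20.04 host, every v1.35.0-beta.0 test in this group will keep failing at the same point regardless of the individual test logic.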

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-752103 apply -f testdata/invalidsvc.yaml
functional_test.go:2326: (dbg) Non-zero exit: kubectl --context functional-752103 apply -f testdata/invalidsvc.yaml: exit status 1 (57.113924ms)

                                                
                                                
** stderr ** 
	error: error validating "testdata/invalidsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

                                                
                                                
** /stderr **
functional_test.go:2328: kubectl --context functional-752103 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (0.06s)
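
The apply never gets as far as validating the manifest: kubectl fails on "dial tcp 192.168.49.2:8441: connect: connection refused", i.e. the apiserver endpoint is down. A quick reachability sketch, assuming curl is available on the CI host (illustration only; it re-checks the endpoint quoted in the stderr above):

	# Probe the apiserver health endpoint from the error message; "connection refused" confirms nothing is listening on 8441
	curl -k --connect-timeout 5 https://192.168.49.2:8441/healthz

	# kubectl's view of the same endpoint
	kubectl --context functional-752103 cluster-info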

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (1.78s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-752103 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-752103 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-752103 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-752103 --alsologtostderr -v=1] stderr:
I1213 18:56:52.070181   63757 out.go:360] Setting OutFile to fd 1 ...
I1213 18:56:52.070351   63757 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 18:56:52.070371   63757 out.go:374] Setting ErrFile to fd 2...
I1213 18:56:52.070387   63757 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 18:56:52.070654   63757 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-2686/.minikube/bin
I1213 18:56:52.070928   63757 mustload.go:66] Loading cluster: functional-752103
I1213 18:56:52.071366   63757 config.go:182] Loaded profile config "functional-752103": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 18:56:52.071911   63757 cli_runner.go:164] Run: docker container inspect functional-752103 --format={{.State.Status}}
I1213 18:56:52.093830   63757 host.go:66] Checking if "functional-752103" exists ...
I1213 18:56:52.094195   63757 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1213 18:56:52.177787   63757 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 18:56:52.168560024 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1213 18:56:52.177895   63757 api_server.go:166] Checking apiserver status ...
I1213 18:56:52.177960   63757 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1213 18:56:52.178005   63757 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
I1213 18:56:52.195321   63757 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa Username:docker}
W1213 18:56:52.302874   63757 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

                                                
                                                
stderr:
I1213 18:56:52.306362   63757 out.go:179] * The control-plane node functional-752103 apiserver is not running: (state=Stopped)
I1213 18:56:52.309523   63757 out.go:179]   To start a cluster, run: "minikube start -p functional-752103"
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-752103
helpers_test.go:244: (dbg) docker inspect functional-752103:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b",
	        "Created": "2025-12-13T18:27:36.869398923Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 33347,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T18:27:36.933863328Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b/hostname",
	        "HostsPath": "/var/lib/docker/containers/d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b/hosts",
	        "LogPath": "/var/lib/docker/containers/d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b/d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b-json.log",
	        "Name": "/functional-752103",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-752103:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-752103",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b",
	                "LowerDir": "/var/lib/docker/overlay2/4f08fa508e4fa526830ae73227295c5d2e0629c99efbda84852b3df25bd9e170-init/diff:/var/lib/docker/overlay2/4cda671c3c20fb572bbb254b6cb2d66de67b46788c2aa883ec19024f1ff16f23/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4f08fa508e4fa526830ae73227295c5d2e0629c99efbda84852b3df25bd9e170/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4f08fa508e4fa526830ae73227295c5d2e0629c99efbda84852b3df25bd9e170/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4f08fa508e4fa526830ae73227295c5d2e0629c99efbda84852b3df25bd9e170/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-752103",
	                "Source": "/var/lib/docker/volumes/functional-752103/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-752103",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-752103",
	                "name.minikube.sigs.k8s.io": "functional-752103",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "625ea12887c8956887678f2408d6edd5b98f62bce458a6906f4f662a3001a53b",
	            "SandboxKey": "/var/run/docker/netns/625ea12887c8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-752103": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7e:2c:83:4a:30:9a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "84df48e9f7dac8c6a1b67709e5eea216d99d3f16eb50b96c7f0e4a82b3193d56",
	                    "EndpointID": "e69b1f9610d40396647a2d78f0170c31b9cd8e641fc8465e742649cccee8e591",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-752103",
	                        "d72b547cdcc2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-752103 -n functional-752103
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-752103 -n functional-752103: exit status 2 (319.97785ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service   │ functional-752103 service hello-node --url                                                                                                          │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │                     │
	│ start     │ -p functional-752103 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0       │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │                     │
	│ start     │ -p functional-752103 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                 │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │                     │
	│ mount     │ -p functional-752103 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4261349897/001:/mount-9p --alsologtostderr -v=1              │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │                     │
	│ ssh       │ functional-752103 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │                     │
	│ ssh       │ functional-752103 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │ 13 Dec 25 18:56 UTC │
	│ ssh       │ functional-752103 ssh -- ls -la /mount-9p                                                                                                           │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │ 13 Dec 25 18:56 UTC │
	│ ssh       │ functional-752103 ssh cat /mount-9p/test-1765652202838952248                                                                                        │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │ 13 Dec 25 18:56 UTC │
	│ ssh       │ functional-752103 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │                     │
	│ ssh       │ functional-752103 ssh sudo umount -f /mount-9p                                                                                                      │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │ 13 Dec 25 18:56 UTC │
	│ mount     │ -p functional-752103 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3505430281/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │                     │
	│ ssh       │ functional-752103 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │                     │
	│ ssh       │ functional-752103 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │ 13 Dec 25 18:56 UTC │
	│ ssh       │ functional-752103 ssh -- ls -la /mount-9p                                                                                                           │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │ 13 Dec 25 18:56 UTC │
	│ ssh       │ functional-752103 ssh sudo umount -f /mount-9p                                                                                                      │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │                     │
	│ ssh       │ functional-752103 ssh findmnt -T /mount1                                                                                                            │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │                     │
	│ mount     │ -p functional-752103 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2250222461/001:/mount1 --alsologtostderr -v=1                │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │                     │
	│ mount     │ -p functional-752103 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2250222461/001:/mount2 --alsologtostderr -v=1                │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │                     │
	│ mount     │ -p functional-752103 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2250222461/001:/mount3 --alsologtostderr -v=1                │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │                     │
	│ ssh       │ functional-752103 ssh findmnt -T /mount1                                                                                                            │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │ 13 Dec 25 18:56 UTC │
	│ ssh       │ functional-752103 ssh findmnt -T /mount2                                                                                                            │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │ 13 Dec 25 18:56 UTC │
	│ ssh       │ functional-752103 ssh findmnt -T /mount3                                                                                                            │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │ 13 Dec 25 18:56 UTC │
	│ mount     │ -p functional-752103 --kill=true                                                                                                                    │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │                     │
	│ start     │ -p functional-752103 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0       │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-752103 --alsologtostderr -v=1                                                                                      │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │                     │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 18:56:51
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 18:56:51.866136   63710 out.go:360] Setting OutFile to fd 1 ...
	I1213 18:56:51.866293   63710 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:56:51.866306   63710 out.go:374] Setting ErrFile to fd 2...
	I1213 18:56:51.866312   63710 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:56:51.866680   63710 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-2686/.minikube/bin
	I1213 18:56:51.867086   63710 out.go:368] Setting JSON to false
	I1213 18:56:51.867979   63710 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5964,"bootTime":1765646248,"procs":162,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 18:56:51.868051   63710 start.go:143] virtualization:  
	I1213 18:56:51.873271   63710 out.go:179] * [functional-752103] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 18:56:51.876287   63710 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 18:56:51.876379   63710 notify.go:221] Checking for updates...
	I1213 18:56:51.882774   63710 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 18:56:51.885834   63710 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-2686/kubeconfig
	I1213 18:56:51.888894   63710 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-2686/.minikube
	I1213 18:56:51.891868   63710 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 18:56:51.894781   63710 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 18:56:51.898170   63710 config.go:182] Loaded profile config "functional-752103": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 18:56:51.898807   63710 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 18:56:51.935030   63710 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 18:56:51.935207   63710 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 18:56:51.996198   63710 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 18:56:51.98661626 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 18:56:51.997457   63710 docker.go:319] overlay module found
	I1213 18:56:52.001965   63710 out.go:179] * Using the docker driver based on existing profile
	I1213 18:56:52.005039   63710 start.go:309] selected driver: docker
	I1213 18:56:52.005084   63710 start.go:927] validating driver "docker" against &{Name:functional-752103 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-752103 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 18:56:52.005188   63710 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 18:56:52.008846   63710 out.go:203] 
	W1213 18:56:52.011880   63710 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1213 18:56:52.014872   63710 out.go:203] 
	
	
	==> CRI-O <==
	Dec 13 18:50:27 functional-752103 crio[9949]: time="2025-12-13T18:50:27.674061922Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=54566989-a940-4ea0-9cb7-11a5ead5fdab name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:50:27 functional-752103 crio[9949]: time="2025-12-13T18:50:27.67476674Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=9907b75f-aebf-4fc7-948f-3e37eff08342 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:50:27 functional-752103 crio[9949]: time="2025-12-13T18:50:27.675335917Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=a5823f6b-c128-468c-ad19-87c38dcb3493 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:50:27 functional-752103 crio[9949]: time="2025-12-13T18:50:27.675801504Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=eb5c5b0d-734a-42c7-beea-2ae04458cd2c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:50:27 functional-752103 crio[9949]: time="2025-12-13T18:50:27.676236125Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=dc8b8dc3-cec8-44a2-afbb-932c674af235 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:50:27 functional-752103 crio[9949]: time="2025-12-13T18:50:27.676718434Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=fae4abe6-592a-492b-809b-edd01682c93f name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:50:27 functional-752103 crio[9949]: time="2025-12-13T18:50:27.677348338Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=21883f8b-9b90-4bb8-9843-c91d88abb931 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:38 functional-752103 crio[9949]: time="2025-12-13T18:54:38.738192708Z" level=info msg="Checking image status: kicbase/echo-server:functional-752103" id=42880e29-20fa-4822-ab33-09bfec92f2e2 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:38 functional-752103 crio[9949]: time="2025-12-13T18:54:38.738390305Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 13 18:54:38 functional-752103 crio[9949]: time="2025-12-13T18:54:38.738442195Z" level=info msg="Image kicbase/echo-server:functional-752103 not found" id=42880e29-20fa-4822-ab33-09bfec92f2e2 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:38 functional-752103 crio[9949]: time="2025-12-13T18:54:38.738517559Z" level=info msg="Neither image nor artfiact kicbase/echo-server:functional-752103 found" id=42880e29-20fa-4822-ab33-09bfec92f2e2 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:38 functional-752103 crio[9949]: time="2025-12-13T18:54:38.772733363Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-752103" id=0cafe629-ccfc-4817-862d-afdd77db9d4d name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:38 functional-752103 crio[9949]: time="2025-12-13T18:54:38.772935481Z" level=info msg="Image docker.io/kicbase/echo-server:functional-752103 not found" id=0cafe629-ccfc-4817-862d-afdd77db9d4d name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:38 functional-752103 crio[9949]: time="2025-12-13T18:54:38.772986583Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-752103 found" id=0cafe629-ccfc-4817-862d-afdd77db9d4d name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:38 functional-752103 crio[9949]: time="2025-12-13T18:54:38.820407985Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-752103" id=6c4fd7ba-e600-48a2-9885-e62592ca43d8 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:38 functional-752103 crio[9949]: time="2025-12-13T18:54:38.820709337Z" level=info msg="Image localhost/kicbase/echo-server:functional-752103 not found" id=6c4fd7ba-e600-48a2-9885-e62592ca43d8 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:38 functional-752103 crio[9949]: time="2025-12-13T18:54:38.82083637Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-752103 found" id=6c4fd7ba-e600-48a2-9885-e62592ca43d8 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:42 functional-752103 crio[9949]: time="2025-12-13T18:54:42.697715247Z" level=info msg="Checking image status: kicbase/echo-server:functional-752103" id=0cc0ac5d-dc21-442d-8110-bad1c5434563 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:42 functional-752103 crio[9949]: time="2025-12-13T18:54:42.697864812Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 13 18:54:42 functional-752103 crio[9949]: time="2025-12-13T18:54:42.697906043Z" level=info msg="Image kicbase/echo-server:functional-752103 not found" id=0cc0ac5d-dc21-442d-8110-bad1c5434563 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:42 functional-752103 crio[9949]: time="2025-12-13T18:54:42.697969526Z" level=info msg="Neither image nor artfiact kicbase/echo-server:functional-752103 found" id=0cc0ac5d-dc21-442d-8110-bad1c5434563 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:42 functional-752103 crio[9949]: time="2025-12-13T18:54:42.726188607Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-752103" id=ec1be44c-64b8-4c7c-9111-17fc0443252c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:42 functional-752103 crio[9949]: time="2025-12-13T18:54:42.726323311Z" level=info msg="Image docker.io/kicbase/echo-server:functional-752103 not found" id=ec1be44c-64b8-4c7c-9111-17fc0443252c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:42 functional-752103 crio[9949]: time="2025-12-13T18:54:42.726371377Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-752103 found" id=ec1be44c-64b8-4c7c-9111-17fc0443252c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:42 functional-752103 crio[9949]: time="2025-12-13T18:54:42.758302806Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-752103" id=f3f55714-9794-4e76-a331-e7982a0121c6 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:56:53.388271   24130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:56:53.389041   24130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:56:53.390546   24130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:56:53.391100   24130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:56:53.392556   24130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 17:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014739] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.517365] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033368] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.774100] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.795951] kauditd_printk_skb: 39 callbacks suppressed
	[Dec13 18:17] overlayfs: idmapped layers are currently not supported
	[  +0.067652] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec13 18:23] overlayfs: idmapped layers are currently not supported
	[Dec13 18:24] overlayfs: idmapped layers are currently not supported
	[Dec13 18:42] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 18:56:53 up  1:39,  0 user,  load average: 1.66, 0.60, 0.42
	Linux functional-752103 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 18:56:50 functional-752103 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 18:56:51 functional-752103 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1149.
	Dec 13 18:56:51 functional-752103 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:56:51 functional-752103 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:56:51 functional-752103 kubelet[23987]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:56:51 functional-752103 kubelet[23987]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:56:51 functional-752103 kubelet[23987]: E1213 18:56:51.380855   23987 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 18:56:51 functional-752103 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 18:56:51 functional-752103 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 18:56:52 functional-752103 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1150.
	Dec 13 18:56:52 functional-752103 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:56:52 functional-752103 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:56:52 functional-752103 kubelet[24017]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:56:52 functional-752103 kubelet[24017]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:56:52 functional-752103 kubelet[24017]: E1213 18:56:52.142633   24017 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 18:56:52 functional-752103 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 18:56:52 functional-752103 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 18:56:52 functional-752103 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1151.
	Dec 13 18:56:52 functional-752103 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:56:52 functional-752103 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:56:52 functional-752103 kubelet[24046]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:56:52 functional-752103 kubelet[24046]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:56:52 functional-752103 kubelet[24046]: E1213 18:56:52.895892   24046 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 18:56:52 functional-752103 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 18:56:52 functional-752103 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-752103 -n functional-752103
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-752103 -n functional-752103: exit status 2 (329.339245ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-752103" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (1.78s)
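
As with the other failures in this group, the dashboard command exits before printing a URL because its apiserver check (the sudo pgrep call in the stderr above) finds no kube-apiserver process and reports state=Stopped. A small sketch of the follow-up steps the logs themselves point at, assuming the profile is still present; this only mirrors what the command output already shows:

	# Re-run the apiserver check the dashboard command performs; exit status 1 means no matching process
	out/minikube-linux-arm64 -p functional-752103 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'

	# The recovery hint printed in the stderr above
	out/minikube-linux-arm64 start -p functional-752103

Separately, the "Last Start" log shows the preceding --dry-run invocation with --memory 250MB being rejected with RSRC_INSUFFICIENT_REQ_MEMORY because 250MiB is below minikube's usable minimum of 1800MB; that rejection comes from the dry-run start logged just before the dashboard run and is distinct from the stopped apiserver.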

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (2.3s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 status
functional_test.go:869: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-752103 status: exit status 2 (295.486646ms)

                                                
                                                
-- stdout --
	functional-752103
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	

                                                
                                                
-- /stdout --
functional_test.go:871: failed to run minikube status. args "out/minikube-linux-arm64 -p functional-752103 status" : exit status 2
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:875: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-752103 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 2 (328.321833ms)

                                                
                                                
-- stdout --
	host:Running,kublet:Stopped,apiserver:Stopped,kubeconfig:Configured

                                                
                                                
-- /stdout --
functional_test.go:877: failed to run minikube status with custom format: args "out/minikube-linux-arm64 -p functional-752103 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 2
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 status -o json
functional_test.go:887: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-752103 status -o json: exit status 2 (301.928994ms)

                                                
                                                
-- stdout --
	{"Name":"functional-752103","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
functional_test.go:889: failed to run minikube status with json output. args "out/minikube-linux-arm64 -p functional-752103 status -o json" : exit status 2
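
All three status invocations report the same picture (host Running, kubelet and apiserver Stopped) and exit with status 2, which the test treats as a failure. A minimal sketch for consuming the JSON form shown above programmatically, assuming bash and jq are available on the CI host (illustration only, not part of the test suite):

	# Extract one field from the JSON output shown above; prints "Stopped" here, "Running" on a healthy cluster
	out/minikube-linux-arm64 -p functional-752103 status -o json | jq -r '.APIServer'

	# In bash, minikube's own exit code (2 here) is still available after the pipeline
	echo "${PIPESTATUS[0]}"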
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-752103
helpers_test.go:244: (dbg) docker inspect functional-752103:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b",
	        "Created": "2025-12-13T18:27:36.869398923Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 33347,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T18:27:36.933863328Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b/hostname",
	        "HostsPath": "/var/lib/docker/containers/d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b/hosts",
	        "LogPath": "/var/lib/docker/containers/d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b/d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b-json.log",
	        "Name": "/functional-752103",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-752103:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-752103",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b",
	                "LowerDir": "/var/lib/docker/overlay2/4f08fa508e4fa526830ae73227295c5d2e0629c99efbda84852b3df25bd9e170-init/diff:/var/lib/docker/overlay2/4cda671c3c20fb572bbb254b6cb2d66de67b46788c2aa883ec19024f1ff16f23/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4f08fa508e4fa526830ae73227295c5d2e0629c99efbda84852b3df25bd9e170/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4f08fa508e4fa526830ae73227295c5d2e0629c99efbda84852b3df25bd9e170/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4f08fa508e4fa526830ae73227295c5d2e0629c99efbda84852b3df25bd9e170/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-752103",
	                "Source": "/var/lib/docker/volumes/functional-752103/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-752103",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-752103",
	                "name.minikube.sigs.k8s.io": "functional-752103",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "625ea12887c8956887678f2408d6edd5b98f62bce458a6906f4f662a3001a53b",
	            "SandboxKey": "/var/run/docker/netns/625ea12887c8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-752103": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7e:2c:83:4a:30:9a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "84df48e9f7dac8c6a1b67709e5eea216d99d3f16eb50b96c7f0e4a82b3193d56",
	                    "EndpointID": "e69b1f9610d40396647a2d78f0170c31b9cd8e641fc8465e742649cccee8e591",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-752103",
	                        "d72b547cdcc2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-752103 -n functional-752103
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-752103 -n functional-752103: exit status 2 (307.728998ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service │ functional-752103 service --namespace=default --https --url hello-node                                                                              │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │                     │
	│ service │ functional-752103 service hello-node --url --format={{.IP}}                                                                                         │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │                     │
	│ service │ functional-752103 service hello-node --url                                                                                                          │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │                     │
	│ start   │ -p functional-752103 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0       │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │                     │
	│ start   │ -p functional-752103 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                 │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │                     │
	│ mount   │ -p functional-752103 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4261349897/001:/mount-9p --alsologtostderr -v=1              │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │                     │
	│ ssh     │ functional-752103 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │                     │
	│ ssh     │ functional-752103 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │ 13 Dec 25 18:56 UTC │
	│ ssh     │ functional-752103 ssh -- ls -la /mount-9p                                                                                                           │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │ 13 Dec 25 18:56 UTC │
	│ ssh     │ functional-752103 ssh cat /mount-9p/test-1765652202838952248                                                                                        │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │ 13 Dec 25 18:56 UTC │
	│ ssh     │ functional-752103 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │                     │
	│ ssh     │ functional-752103 ssh sudo umount -f /mount-9p                                                                                                      │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │ 13 Dec 25 18:56 UTC │
	│ mount   │ -p functional-752103 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3505430281/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │                     │
	│ ssh     │ functional-752103 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │                     │
	│ ssh     │ functional-752103 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │ 13 Dec 25 18:56 UTC │
	│ ssh     │ functional-752103 ssh -- ls -la /mount-9p                                                                                                           │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │ 13 Dec 25 18:56 UTC │
	│ ssh     │ functional-752103 ssh sudo umount -f /mount-9p                                                                                                      │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │                     │
	│ ssh     │ functional-752103 ssh findmnt -T /mount1                                                                                                            │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │                     │
	│ mount   │ -p functional-752103 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2250222461/001:/mount1 --alsologtostderr -v=1                │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │                     │
	│ mount   │ -p functional-752103 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2250222461/001:/mount2 --alsologtostderr -v=1                │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │                     │
	│ mount   │ -p functional-752103 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2250222461/001:/mount3 --alsologtostderr -v=1                │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │                     │
	│ ssh     │ functional-752103 ssh findmnt -T /mount1                                                                                                            │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │ 13 Dec 25 18:56 UTC │
	│ ssh     │ functional-752103 ssh findmnt -T /mount2                                                                                                            │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │ 13 Dec 25 18:56 UTC │
	│ ssh     │ functional-752103 ssh findmnt -T /mount3                                                                                                            │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │ 13 Dec 25 18:56 UTC │
	│ mount   │ -p functional-752103 --kill=true                                                                                                                    │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 18:56:42
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 18:56:42.645266   61769 out.go:360] Setting OutFile to fd 1 ...
	I1213 18:56:42.645550   61769 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:56:42.645583   61769 out.go:374] Setting ErrFile to fd 2...
	I1213 18:56:42.645604   61769 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:56:42.645899   61769 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-2686/.minikube/bin
	I1213 18:56:42.646323   61769 out.go:368] Setting JSON to false
	I1213 18:56:42.647160   61769 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5955,"bootTime":1765646248,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 18:56:42.647265   61769 start.go:143] virtualization:  
	I1213 18:56:42.650524   61769 out.go:179] * [functional-752103] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 18:56:42.654315   61769 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 18:56:42.654407   61769 notify.go:221] Checking for updates...
	I1213 18:56:42.660061   61769 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 18:56:42.663007   61769 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-2686/kubeconfig
	I1213 18:56:42.665909   61769 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-2686/.minikube
	I1213 18:56:42.668745   61769 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 18:56:42.671690   61769 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 18:56:42.675067   61769 config.go:182] Loaded profile config "functional-752103": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 18:56:42.675703   61769 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 18:56:42.705127   61769 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 18:56:42.705338   61769 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 18:56:42.762349   61769 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 18:56:42.753205967 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 18:56:42.762457   61769 docker.go:319] overlay module found
	I1213 18:56:42.765512   61769 out.go:179] * Using the docker driver based on existing profile
	I1213 18:56:42.768349   61769 start.go:309] selected driver: docker
	I1213 18:56:42.768373   61769 start.go:927] validating driver "docker" against &{Name:functional-752103 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-752103 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 18:56:42.768486   61769 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 18:56:42.768603   61769 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 18:56:42.826354   61769 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 18:56:42.817592408 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 18:56:42.826795   61769 cni.go:84] Creating CNI manager for ""
	I1213 18:56:42.826856   61769 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 18:56:42.826902   61769 start.go:353] cluster config:
	{Name:functional-752103 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-752103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog
:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 18:56:42.830170   61769 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Dec 13 18:50:27 functional-752103 crio[9949]: time="2025-12-13T18:50:27.674061922Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=54566989-a940-4ea0-9cb7-11a5ead5fdab name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:50:27 functional-752103 crio[9949]: time="2025-12-13T18:50:27.67476674Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=9907b75f-aebf-4fc7-948f-3e37eff08342 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:50:27 functional-752103 crio[9949]: time="2025-12-13T18:50:27.675335917Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=a5823f6b-c128-468c-ad19-87c38dcb3493 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:50:27 functional-752103 crio[9949]: time="2025-12-13T18:50:27.675801504Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=eb5c5b0d-734a-42c7-beea-2ae04458cd2c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:50:27 functional-752103 crio[9949]: time="2025-12-13T18:50:27.676236125Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=dc8b8dc3-cec8-44a2-afbb-932c674af235 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:50:27 functional-752103 crio[9949]: time="2025-12-13T18:50:27.676718434Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=fae4abe6-592a-492b-809b-edd01682c93f name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:50:27 functional-752103 crio[9949]: time="2025-12-13T18:50:27.677348338Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=21883f8b-9b90-4bb8-9843-c91d88abb931 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:38 functional-752103 crio[9949]: time="2025-12-13T18:54:38.738192708Z" level=info msg="Checking image status: kicbase/echo-server:functional-752103" id=42880e29-20fa-4822-ab33-09bfec92f2e2 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:38 functional-752103 crio[9949]: time="2025-12-13T18:54:38.738390305Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 13 18:54:38 functional-752103 crio[9949]: time="2025-12-13T18:54:38.738442195Z" level=info msg="Image kicbase/echo-server:functional-752103 not found" id=42880e29-20fa-4822-ab33-09bfec92f2e2 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:38 functional-752103 crio[9949]: time="2025-12-13T18:54:38.738517559Z" level=info msg="Neither image nor artfiact kicbase/echo-server:functional-752103 found" id=42880e29-20fa-4822-ab33-09bfec92f2e2 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:38 functional-752103 crio[9949]: time="2025-12-13T18:54:38.772733363Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-752103" id=0cafe629-ccfc-4817-862d-afdd77db9d4d name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:38 functional-752103 crio[9949]: time="2025-12-13T18:54:38.772935481Z" level=info msg="Image docker.io/kicbase/echo-server:functional-752103 not found" id=0cafe629-ccfc-4817-862d-afdd77db9d4d name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:38 functional-752103 crio[9949]: time="2025-12-13T18:54:38.772986583Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-752103 found" id=0cafe629-ccfc-4817-862d-afdd77db9d4d name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:38 functional-752103 crio[9949]: time="2025-12-13T18:54:38.820407985Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-752103" id=6c4fd7ba-e600-48a2-9885-e62592ca43d8 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:38 functional-752103 crio[9949]: time="2025-12-13T18:54:38.820709337Z" level=info msg="Image localhost/kicbase/echo-server:functional-752103 not found" id=6c4fd7ba-e600-48a2-9885-e62592ca43d8 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:38 functional-752103 crio[9949]: time="2025-12-13T18:54:38.82083637Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-752103 found" id=6c4fd7ba-e600-48a2-9885-e62592ca43d8 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:42 functional-752103 crio[9949]: time="2025-12-13T18:54:42.697715247Z" level=info msg="Checking image status: kicbase/echo-server:functional-752103" id=0cc0ac5d-dc21-442d-8110-bad1c5434563 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:42 functional-752103 crio[9949]: time="2025-12-13T18:54:42.697864812Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 13 18:54:42 functional-752103 crio[9949]: time="2025-12-13T18:54:42.697906043Z" level=info msg="Image kicbase/echo-server:functional-752103 not found" id=0cc0ac5d-dc21-442d-8110-bad1c5434563 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:42 functional-752103 crio[9949]: time="2025-12-13T18:54:42.697969526Z" level=info msg="Neither image nor artfiact kicbase/echo-server:functional-752103 found" id=0cc0ac5d-dc21-442d-8110-bad1c5434563 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:42 functional-752103 crio[9949]: time="2025-12-13T18:54:42.726188607Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-752103" id=ec1be44c-64b8-4c7c-9111-17fc0443252c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:42 functional-752103 crio[9949]: time="2025-12-13T18:54:42.726323311Z" level=info msg="Image docker.io/kicbase/echo-server:functional-752103 not found" id=ec1be44c-64b8-4c7c-9111-17fc0443252c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:42 functional-752103 crio[9949]: time="2025-12-13T18:54:42.726371377Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-752103 found" id=ec1be44c-64b8-4c7c-9111-17fc0443252c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:42 functional-752103 crio[9949]: time="2025-12-13T18:54:42.758302806Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-752103" id=f3f55714-9794-4e76-a331-e7982a0121c6 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:56:51.373774   23982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:56:51.374679   23982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:56:51.376421   23982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:56:51.376714   23982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:56:51.378138   23982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 17:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014739] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.517365] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033368] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.774100] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.795951] kauditd_printk_skb: 39 callbacks suppressed
	[Dec13 18:17] overlayfs: idmapped layers are currently not supported
	[  +0.067652] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec13 18:23] overlayfs: idmapped layers are currently not supported
	[Dec13 18:24] overlayfs: idmapped layers are currently not supported
	[Dec13 18:42] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 18:56:51 up  1:39,  0 user,  load average: 1.28, 0.51, 0.39
	Linux functional-752103 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 18:56:49 functional-752103 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 18:56:49 functional-752103 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1147.
	Dec 13 18:56:49 functional-752103 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:56:49 functional-752103 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:56:49 functional-752103 kubelet[23848]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:56:49 functional-752103 kubelet[23848]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:56:49 functional-752103 kubelet[23848]: E1213 18:56:49.899312   23848 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 18:56:49 functional-752103 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 18:56:49 functional-752103 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 18:56:50 functional-752103 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1148.
	Dec 13 18:56:50 functional-752103 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:56:50 functional-752103 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:56:50 functional-752103 kubelet[23885]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:56:50 functional-752103 kubelet[23885]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:56:50 functional-752103 kubelet[23885]: E1213 18:56:50.635115   23885 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 18:56:50 functional-752103 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 18:56:50 functional-752103 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 18:56:51 functional-752103 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1149.
	Dec 13 18:56:51 functional-752103 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:56:51 functional-752103 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:56:51 functional-752103 kubelet[23987]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:56:51 functional-752103 kubelet[23987]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:56:51 functional-752103 kubelet[23987]: E1213 18:56:51.380855   23987 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 18:56:51 functional-752103 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 18:56:51 functional-752103 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-752103 -n functional-752103
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-752103 -n functional-752103: exit status 2 (356.117416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-752103" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (2.30s)
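The kubelet journal above shows every restart failing validation with "kubelet is configured to not run on a host using cgroup v1", which is why the status probes report kubelet and apiserver as Stopped while the container host stays Running. A minimal sketch for confirming the cgroup mode, assuming a shell on the Ubuntu 20.04 runner and reusing the profile name from this report (neither command is part of the test suite):

	# "cgroup2fs" means the unified (v2) hierarchy; "tmpfs" means the legacy v1 layout the kubelet rejects.
	stat -fc %T /sys/fs/cgroup/
	# Same probe inside the minikube node container:
	docker exec functional-752103 stat -fc %T /sys/fs/cgroup/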

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (2.37s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-752103 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1636: (dbg) Non-zero exit: kubectl --context functional-752103 create deployment hello-node-connect --image kicbase/echo-server: exit status 1 (57.461136ms)

                                                
                                                
** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

                                                
                                                
** /stderr **
functional_test.go:1638: failed to create hello-node deployment with this command "kubectl --context functional-752103 create deployment hello-node-connect --image kicbase/echo-server": exit status 1.
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-752103 describe po hello-node-connect
functional_test.go:1612: (dbg) Non-zero exit: kubectl --context functional-752103 describe po hello-node-connect: exit status 1 (58.997795ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:1614: "kubectl --context functional-752103 describe po hello-node-connect" failed: exit status 1
functional_test.go:1616: hello-node pod describe:
functional_test.go:1618: (dbg) Run:  kubectl --context functional-752103 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-752103 logs -l app=hello-node-connect: exit status 1 (59.35898ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-752103 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-752103 describe svc hello-node-connect
functional_test.go:1624: (dbg) Non-zero exit: kubectl --context functional-752103 describe svc hello-node-connect: exit status 1 (60.16528ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:1626: "kubectl --context functional-752103 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1628: hello-node svc describe:
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-752103
helpers_test.go:244: (dbg) docker inspect functional-752103:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b",
	        "Created": "2025-12-13T18:27:36.869398923Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 33347,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T18:27:36.933863328Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b/hostname",
	        "HostsPath": "/var/lib/docker/containers/d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b/hosts",
	        "LogPath": "/var/lib/docker/containers/d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b/d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b-json.log",
	        "Name": "/functional-752103",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-752103:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-752103",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b",
	                "LowerDir": "/var/lib/docker/overlay2/4f08fa508e4fa526830ae73227295c5d2e0629c99efbda84852b3df25bd9e170-init/diff:/var/lib/docker/overlay2/4cda671c3c20fb572bbb254b6cb2d66de67b46788c2aa883ec19024f1ff16f23/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4f08fa508e4fa526830ae73227295c5d2e0629c99efbda84852b3df25bd9e170/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4f08fa508e4fa526830ae73227295c5d2e0629c99efbda84852b3df25bd9e170/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4f08fa508e4fa526830ae73227295c5d2e0629c99efbda84852b3df25bd9e170/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-752103",
	                "Source": "/var/lib/docker/volumes/functional-752103/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-752103",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-752103",
	                "name.minikube.sigs.k8s.io": "functional-752103",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "625ea12887c8956887678f2408d6edd5b98f62bce458a6906f4f662a3001a53b",
	            "SandboxKey": "/var/run/docker/netns/625ea12887c8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-752103": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7e:2c:83:4a:30:9a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "84df48e9f7dac8c6a1b67709e5eea216d99d3f16eb50b96c7f0e4a82b3193d56",
	                    "EndpointID": "e69b1f9610d40396647a2d78f0170c31b9cd8e641fc8465e742649cccee8e591",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-752103",
	                        "d72b547cdcc2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
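Note: the "NetworkSettings.Ports" map in the inspect output above is what the test harness relies on to reach the node; each exposed container port (22 for SSH, 8441 for the API server, plus 2376, 5000 and 32443) is published on an ephemeral 127.0.0.1 host port. As a sketch only, one way to read a single mapping by hand is to reuse the same Go template that the minikube log below applies to 22/tcp (profile name and port are taken from this run and shown purely for illustration):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-752103
	# for this run: 8441/tcp (API server) -> 127.0.0.1:32786, 22/tcp (SSH) -> 127.0.0.1:32783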
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-752103 -n functional-752103
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-752103 -n functional-752103: exit status 2 (329.105916ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-752103 image ls                                                                                                                                │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:54 UTC │ 13 Dec 25 18:54 UTC │
	│ image   │ functional-752103 image load --daemon kicbase/echo-server:functional-752103 --alsologtostderr                                                             │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:54 UTC │ 13 Dec 25 18:54 UTC │
	│ image   │ functional-752103 image ls                                                                                                                                │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:54 UTC │ 13 Dec 25 18:54 UTC │
	│ image   │ functional-752103 image load --daemon kicbase/echo-server:functional-752103 --alsologtostderr                                                             │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:54 UTC │ 13 Dec 25 18:54 UTC │
	│ ssh     │ functional-752103 ssh sudo cat /etc/ssl/certs/4637.pem                                                                                                    │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:54 UTC │ 13 Dec 25 18:54 UTC │
	│ ssh     │ functional-752103 ssh sudo cat /usr/share/ca-certificates/4637.pem                                                                                        │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:54 UTC │ 13 Dec 25 18:54 UTC │
	│ image   │ functional-752103 image ls                                                                                                                                │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:54 UTC │ 13 Dec 25 18:54 UTC │
	│ image   │ functional-752103 image save kicbase/echo-server:functional-752103 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:54 UTC │ 13 Dec 25 18:54 UTC │
	│ ssh     │ functional-752103 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                  │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:54 UTC │ 13 Dec 25 18:54 UTC │
	│ ssh     │ functional-752103 ssh sudo cat /etc/ssl/certs/46372.pem                                                                                                   │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:54 UTC │ 13 Dec 25 18:54 UTC │
	│ image   │ functional-752103 image rm kicbase/echo-server:functional-752103 --alsologtostderr                                                                        │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:54 UTC │ 13 Dec 25 18:54 UTC │
	│ ssh     │ functional-752103 ssh sudo cat /usr/share/ca-certificates/46372.pem                                                                                       │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:54 UTC │ 13 Dec 25 18:54 UTC │
	│ image   │ functional-752103 image ls                                                                                                                                │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:54 UTC │ 13 Dec 25 18:54 UTC │
	│ image   │ functional-752103 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:54 UTC │ 13 Dec 25 18:54 UTC │
	│ ssh     │ functional-752103 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:54 UTC │ 13 Dec 25 18:54 UTC │
	│ ssh     │ functional-752103 ssh sudo cat /etc/test/nested/copy/4637/hosts                                                                                           │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:54 UTC │ 13 Dec 25 18:54 UTC │
	│ image   │ functional-752103 image ls                                                                                                                                │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:54 UTC │ 13 Dec 25 18:54 UTC │
	│ image   │ functional-752103 image save --daemon kicbase/echo-server:functional-752103 --alsologtostderr                                                             │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:54 UTC │ 13 Dec 25 18:54 UTC │
	│ ssh     │ functional-752103 ssh echo hello                                                                                                                          │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:54 UTC │ 13 Dec 25 18:54 UTC │
	│ ssh     │ functional-752103 ssh cat /etc/hostname                                                                                                                   │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:54 UTC │ 13 Dec 25 18:54 UTC │
	│ tunnel  │ functional-752103 tunnel --alsologtostderr                                                                                                                │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:54 UTC │                     │
	│ tunnel  │ functional-752103 tunnel --alsologtostderr                                                                                                                │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:54 UTC │                     │
	│ tunnel  │ functional-752103 tunnel --alsologtostderr                                                                                                                │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:54 UTC │                     │
	│ addons  │ functional-752103 addons list                                                                                                                             │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │ 13 Dec 25 18:56 UTC │
	│ addons  │ functional-752103 addons list -o json                                                                                                                     │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │ 13 Dec 25 18:56 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 18:42:16
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 18:42:16.832380   44722 out.go:360] Setting OutFile to fd 1 ...
	I1213 18:42:16.832482   44722 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:42:16.832486   44722 out.go:374] Setting ErrFile to fd 2...
	I1213 18:42:16.832490   44722 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:42:16.832750   44722 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-2686/.minikube/bin
	I1213 18:42:16.833154   44722 out.go:368] Setting JSON to false
	I1213 18:42:16.833990   44722 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5089,"bootTime":1765646248,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 18:42:16.834047   44722 start.go:143] virtualization:  
	I1213 18:42:16.838135   44722 out.go:179] * [functional-752103] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 18:42:16.841728   44722 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 18:42:16.841798   44722 notify.go:221] Checking for updates...
	I1213 18:42:16.848230   44722 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 18:42:16.851409   44722 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-2686/kubeconfig
	I1213 18:42:16.854607   44722 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-2686/.minikube
	I1213 18:42:16.857801   44722 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 18:42:16.860996   44722 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 18:42:16.864675   44722 config.go:182] Loaded profile config "functional-752103": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 18:42:16.864787   44722 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 18:42:16.894628   44722 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 18:42:16.894745   44722 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 18:42:16.957351   44722 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-13 18:42:16.94760506 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 18:42:16.957447   44722 docker.go:319] overlay module found
	I1213 18:42:16.960782   44722 out.go:179] * Using the docker driver based on existing profile
	I1213 18:42:16.963851   44722 start.go:309] selected driver: docker
	I1213 18:42:16.963862   44722 start.go:927] validating driver "docker" against &{Name:functional-752103 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-752103 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 18:42:16.963972   44722 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 18:42:16.964069   44722 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 18:42:17.021522   44722 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-13 18:42:17.012232642 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 18:42:17.021951   44722 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 18:42:17.021974   44722 cni.go:84] Creating CNI manager for ""
	I1213 18:42:17.022024   44722 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 18:42:17.022071   44722 start.go:353] cluster config:
	{Name:functional-752103 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-752103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog
:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 18:42:17.025231   44722 out.go:179] * Starting "functional-752103" primary control-plane node in "functional-752103" cluster
	I1213 18:42:17.028293   44722 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 18:42:17.031266   44722 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 18:42:17.034129   44722 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 18:42:17.034163   44722 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-2686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1213 18:42:17.034171   44722 cache.go:65] Caching tarball of preloaded images
	I1213 18:42:17.034196   44722 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 18:42:17.034259   44722 preload.go:238] Found /home/jenkins/minikube-integration/22122-2686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 18:42:17.034268   44722 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1213 18:42:17.034379   44722 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/config.json ...
	I1213 18:42:17.054759   44722 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 18:42:17.054770   44722 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 18:42:17.054784   44722 cache.go:243] Successfully downloaded all kic artifacts
	I1213 18:42:17.054813   44722 start.go:360] acquireMachinesLock for functional-752103: {Name:mkf4ec1d9e1836ef54983db4562aedfd1a9c51c3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 18:42:17.054868   44722 start.go:364] duration metric: took 38.187µs to acquireMachinesLock for "functional-752103"
	I1213 18:42:17.054886   44722 start.go:96] Skipping create...Using existing machine configuration
	I1213 18:42:17.054891   44722 fix.go:54] fixHost starting: 
	I1213 18:42:17.055151   44722 cli_runner.go:164] Run: docker container inspect functional-752103 --format={{.State.Status}}
	I1213 18:42:17.071486   44722 fix.go:112] recreateIfNeeded on functional-752103: state=Running err=<nil>
	W1213 18:42:17.071504   44722 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 18:42:17.074803   44722 out.go:252] * Updating the running docker "functional-752103" container ...
	I1213 18:42:17.074833   44722 machine.go:94] provisionDockerMachine start ...
	I1213 18:42:17.074935   44722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:42:17.093274   44722 main.go:143] libmachine: Using SSH client type: native
	I1213 18:42:17.093585   44722 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1213 18:42:17.093591   44722 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 18:42:17.244524   44722 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-752103
	
	I1213 18:42:17.244537   44722 ubuntu.go:182] provisioning hostname "functional-752103"
	I1213 18:42:17.244597   44722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:42:17.262380   44722 main.go:143] libmachine: Using SSH client type: native
	I1213 18:42:17.262682   44722 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1213 18:42:17.262690   44722 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-752103 && echo "functional-752103" | sudo tee /etc/hostname
	I1213 18:42:17.422688   44722 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-752103
	
	I1213 18:42:17.422759   44722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:42:17.440827   44722 main.go:143] libmachine: Using SSH client type: native
	I1213 18:42:17.441150   44722 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1213 18:42:17.441163   44722 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-752103' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-752103/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-752103' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 18:42:17.593792   44722 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 18:42:17.593821   44722 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-2686/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-2686/.minikube}
	I1213 18:42:17.593841   44722 ubuntu.go:190] setting up certificates
	I1213 18:42:17.593861   44722 provision.go:84] configureAuth start
	I1213 18:42:17.593949   44722 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-752103
	I1213 18:42:17.612231   44722 provision.go:143] copyHostCerts
	I1213 18:42:17.612297   44722 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem, removing ...
	I1213 18:42:17.612304   44722 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem
	I1213 18:42:17.612382   44722 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem (1675 bytes)
	I1213 18:42:17.612525   44722 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem, removing ...
	I1213 18:42:17.612528   44722 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem
	I1213 18:42:17.612554   44722 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem (1082 bytes)
	I1213 18:42:17.612619   44722 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem, removing ...
	I1213 18:42:17.612622   44722 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem
	I1213 18:42:17.612646   44722 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem (1123 bytes)
	I1213 18:42:17.612700   44722 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem org=jenkins.functional-752103 san=[127.0.0.1 192.168.49.2 functional-752103 localhost minikube]
	I1213 18:42:17.675451   44722 provision.go:177] copyRemoteCerts
	I1213 18:42:17.675509   44722 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 18:42:17.675551   44722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:42:17.693626   44722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa Username:docker}
	I1213 18:42:17.798419   44722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 18:42:17.816185   44722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 18:42:17.833700   44722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 18:42:17.853857   44722 provision.go:87] duration metric: took 259.975405ms to configureAuth
	I1213 18:42:17.853904   44722 ubuntu.go:206] setting minikube options for container-runtime
	I1213 18:42:17.854123   44722 config.go:182] Loaded profile config "functional-752103": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 18:42:17.854230   44722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:42:17.879965   44722 main.go:143] libmachine: Using SSH client type: native
	I1213 18:42:17.880277   44722 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1213 18:42:17.880288   44722 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 18:42:18.248633   44722 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 18:42:18.248647   44722 machine.go:97] duration metric: took 1.173808025s to provisionDockerMachine
	I1213 18:42:18.248658   44722 start.go:293] postStartSetup for "functional-752103" (driver="docker")
	I1213 18:42:18.248669   44722 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 18:42:18.248743   44722 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 18:42:18.248792   44722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:42:18.266147   44722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa Username:docker}
	I1213 18:42:18.373221   44722 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 18:42:18.376713   44722 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 18:42:18.376729   44722 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 18:42:18.376740   44722 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-2686/.minikube/addons for local assets ...
	I1213 18:42:18.376791   44722 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-2686/.minikube/files for local assets ...
	I1213 18:42:18.376867   44722 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem -> 46372.pem in /etc/ssl/certs
	I1213 18:42:18.376940   44722 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/test/nested/copy/4637/hosts -> hosts in /etc/test/nested/copy/4637
	I1213 18:42:18.376981   44722 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4637
	I1213 18:42:18.384622   44722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem --> /etc/ssl/certs/46372.pem (1708 bytes)
	I1213 18:42:18.402512   44722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/test/nested/copy/4637/hosts --> /etc/test/nested/copy/4637/hosts (40 bytes)
	I1213 18:42:18.419539   44722 start.go:296] duration metric: took 170.867557ms for postStartSetup
	I1213 18:42:18.419610   44722 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 18:42:18.419664   44722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:42:18.436637   44722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa Username:docker}
	I1213 18:42:18.538189   44722 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 18:42:18.542827   44722 fix.go:56] duration metric: took 1.487930222s for fixHost
	I1213 18:42:18.542846   44722 start.go:83] releasing machines lock for "functional-752103", held for 1.487968187s
	I1213 18:42:18.542915   44722 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-752103
	I1213 18:42:18.560389   44722 ssh_runner.go:195] Run: cat /version.json
	I1213 18:42:18.560434   44722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:42:18.560692   44722 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 18:42:18.560748   44722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:42:18.583551   44722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa Username:docker}
	I1213 18:42:18.591018   44722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa Username:docker}
	I1213 18:42:18.701640   44722 ssh_runner.go:195] Run: systemctl --version
	I1213 18:42:18.800116   44722 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 18:42:18.836359   44722 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 18:42:18.840572   44722 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 18:42:18.840646   44722 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 18:42:18.848286   44722 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 18:42:18.848299   44722 start.go:496] detecting cgroup driver to use...
	I1213 18:42:18.848329   44722 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 18:42:18.848379   44722 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 18:42:18.864054   44722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 18:42:18.878242   44722 docker.go:218] disabling cri-docker service (if available) ...
	I1213 18:42:18.878341   44722 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 18:42:18.895499   44722 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 18:42:18.910156   44722 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 18:42:19.020039   44722 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 18:42:19.142208   44722 docker.go:234] disabling docker service ...
	I1213 18:42:19.142263   44722 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 18:42:19.158384   44722 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 18:42:19.171631   44722 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 18:42:19.293369   44722 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 18:42:19.422037   44722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 18:42:19.435333   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 18:42:19.449327   44722 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 18:42:19.449380   44722 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:42:19.458689   44722 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 18:42:19.458748   44722 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:42:19.467502   44722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:42:19.476408   44722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:42:19.485815   44722 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 18:42:19.494237   44722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:42:19.503335   44722 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:42:19.511920   44722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:42:19.520510   44722 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 18:42:19.528006   44722 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 18:42:19.535403   44722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 18:42:19.669317   44722 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 18:42:19.868011   44722 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 18:42:19.868104   44722 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 18:42:19.871850   44722 start.go:564] Will wait 60s for crictl version
	I1213 18:42:19.871906   44722 ssh_runner.go:195] Run: which crictl
	I1213 18:42:19.875387   44722 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 18:42:19.901618   44722 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 18:42:19.901703   44722 ssh_runner.go:195] Run: crio --version
	I1213 18:42:19.929436   44722 ssh_runner.go:195] Run: crio --version
	I1213 18:42:19.965392   44722 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1213 18:42:19.968348   44722 cli_runner.go:164] Run: docker network inspect functional-752103 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 18:42:19.986389   44722 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 18:42:19.993243   44722 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1213 18:42:19.996095   44722 kubeadm.go:884] updating cluster {Name:functional-752103 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-752103 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 18:42:19.996213   44722 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 18:42:19.996291   44722 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 18:42:20.057560   44722 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 18:42:20.057583   44722 crio.go:433] Images already preloaded, skipping extraction
	I1213 18:42:20.057640   44722 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 18:42:20.089218   44722 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 18:42:20.089230   44722 cache_images.go:86] Images are preloaded, skipping loading
	I1213 18:42:20.089236   44722 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1213 18:42:20.089328   44722 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-752103 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-752103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 18:42:20.089414   44722 ssh_runner.go:195] Run: crio config
	I1213 18:42:20.177167   44722 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1213 18:42:20.177187   44722 cni.go:84] Creating CNI manager for ""
	I1213 18:42:20.177196   44722 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 18:42:20.177232   44722 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 18:42:20.177254   44722 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-752103 NodeName:functional-752103 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfig
Opts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 18:42:20.177418   44722 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-752103"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 18:42:20.177484   44722 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 18:42:20.185578   44722 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 18:42:20.185638   44722 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 18:42:20.192929   44722 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1213 18:42:20.205146   44722 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 18:42:20.217154   44722 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I1213 18:42:20.229717   44722 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 18:42:20.233247   44722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 18:42:20.353829   44722 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 18:42:20.830403   44722 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103 for IP: 192.168.49.2
	I1213 18:42:20.830413   44722 certs.go:195] generating shared ca certs ...
	I1213 18:42:20.830433   44722 certs.go:227] acquiring lock for ca certs: {Name:mkf9b87b9a1a82bdfcae8a54365751154f6d858f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 18:42:20.830617   44722 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key
	I1213 18:42:20.830683   44722 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key
	I1213 18:42:20.830690   44722 certs.go:257] generating profile certs ...
	I1213 18:42:20.830812   44722 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/client.key
	I1213 18:42:20.830890   44722 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/apiserver.key.597c6026
	I1213 18:42:20.830949   44722 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/proxy-client.key
	I1213 18:42:20.831080   44722 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637.pem (1338 bytes)
	W1213 18:42:20.831115   44722 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637_empty.pem, impossibly tiny 0 bytes
	I1213 18:42:20.831122   44722 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 18:42:20.831151   44722 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem (1082 bytes)
	I1213 18:42:20.831178   44722 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem (1123 bytes)
	I1213 18:42:20.831204   44722 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem (1675 bytes)
	I1213 18:42:20.831248   44722 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem (1708 bytes)
	I1213 18:42:20.831981   44722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 18:42:20.856838   44722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 18:42:20.879274   44722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 18:42:20.903042   44722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 18:42:20.923306   44722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 18:42:20.942121   44722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 18:42:20.960173   44722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 18:42:20.977612   44722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 18:42:20.994747   44722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 18:42:21.015274   44722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637.pem --> /usr/share/ca-certificates/4637.pem (1338 bytes)
	I1213 18:42:21.032852   44722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem --> /usr/share/ca-certificates/46372.pem (1708 bytes)
	I1213 18:42:21.049826   44722 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 18:42:21.062502   44722 ssh_runner.go:195] Run: openssl version
	I1213 18:42:21.068589   44722 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 18:42:21.075691   44722 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 18:42:21.083152   44722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 18:42:21.086777   44722 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I1213 18:42:21.086838   44722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 18:42:21.127646   44722 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 18:42:21.135282   44722 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4637.pem
	I1213 18:42:21.142547   44722 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4637.pem /etc/ssl/certs/4637.pem
	I1213 18:42:21.150436   44722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4637.pem
	I1213 18:42:21.154171   44722 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 18:27 /usr/share/ca-certificates/4637.pem
	I1213 18:42:21.154226   44722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4637.pem
	I1213 18:42:21.195398   44722 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 18:42:21.202918   44722 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/46372.pem
	I1213 18:42:21.210392   44722 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/46372.pem /etc/ssl/certs/46372.pem
	I1213 18:42:21.218018   44722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/46372.pem
	I1213 18:42:21.221839   44722 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 18:27 /usr/share/ca-certificates/46372.pem
	I1213 18:42:21.221907   44722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/46372.pem
	I1213 18:42:21.262578   44722 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
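The openssl/ln sequence above is how each CA certificate is installed into the system trust store: the subject hash printed by "openssl x509 -hash -noout" becomes the name of a symlink under /etc/ssl/certs (b5213941.0 for minikubeCA.pem here). A hypothetical stand-alone Go equivalent of that step, using only paths shown in the log and requiring root:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCA computes the OpenSSL subject hash of certPath and creates the
// /etc/ssl/certs/<hash>.0 symlink that TLS libraries use for CA lookups,
// mirroring the "openssl x509 -hash" + "ln -fs" sequence in the log.
func linkCA(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // "-fs": replace an existing link if present
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("CA symlink installed")
}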
	I1213 18:42:21.269897   44722 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 18:42:21.273658   44722 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 18:42:21.314538   44722 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 18:42:21.355677   44722 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 18:42:21.398275   44722 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 18:42:21.439207   44722 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 18:42:21.480256   44722 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
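The run of "-checkend 86400" commands verifies that each control-plane certificate remains valid for at least the next 24 hours (86400 seconds); openssl exits non-zero when a certificate would expire inside that window, which would force regeneration. A minimal sketch of the same check, with certificate paths copied from the log:

package main

import (
	"fmt"
	"os/exec"
)

// stillValid reports whether the certificate will NOT expire within the next
// 24h, mirroring "openssl x509 -noout -in <cert> -checkend 86400" above
// (openssl exits 0 when the cert outlives the window, 1 when it will expire).
func stillValid(certPath string) bool {
	return exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400").Run() == nil
}

func main() {
	certs := []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	}
	for _, c := range certs {
		fmt.Printf("%s valid for 24h: %v\n", c, stillValid(c))
	}
}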
	I1213 18:42:21.526473   44722 kubeadm.go:401] StartCluster: {Name:functional-752103 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-752103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 18:42:21.526551   44722 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 18:42:21.526617   44722 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 18:42:21.557940   44722 cri.go:89] found id: ""
	I1213 18:42:21.558001   44722 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 18:42:21.566021   44722 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 18:42:21.566031   44722 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 18:42:21.566081   44722 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 18:42:21.573603   44722 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 18:42:21.574106   44722 kubeconfig.go:125] found "functional-752103" server: "https://192.168.49.2:8441"
	I1213 18:42:21.575413   44722 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 18:42:21.585702   44722 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-13 18:27:45.810242505 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-13 18:42:20.222041116 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
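The drift check is a plain "diff -u" between the kubeadm.yaml written by the previous start and the freshly rendered kubeadm.yaml.new; any difference (here the enable-admission-plugins override from the ExtraConfig test) makes minikube reconfigure the control plane rather than simply restart it. A hypothetical stand-alone version of that decision:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// diff exits 0 when the files match, 1 when they differ, >1 on error.
	cmd := exec.Command("diff", "-u",
		"/var/tmp/minikube/kubeadm.yaml",
		"/var/tmp/minikube/kubeadm.yaml.new")
	out, err := cmd.CombinedOutput()
	if err == nil {
		fmt.Println("no kubeadm config drift; a plain restart is enough")
		return
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		fmt.Println("config drift detected, reconfiguring cluster:")
		fmt.Print(string(out))
		return
	}
	log.Fatalf("diff failed: %v", err)
}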
	I1213 18:42:21.585713   44722 kubeadm.go:1161] stopping kube-system containers ...
	I1213 18:42:21.585724   44722 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1213 18:42:21.585780   44722 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 18:42:21.617768   44722 cri.go:89] found id: ""
	I1213 18:42:21.617827   44722 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 18:42:21.635403   44722 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 18:42:21.643636   44722 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 13 18:31 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Dec 13 18:31 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 13 18:31 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec 13 18:31 /etc/kubernetes/scheduler.conf
	
	I1213 18:42:21.643708   44722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 18:42:21.651764   44722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 18:42:21.659161   44722 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 18:42:21.659213   44722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 18:42:21.666555   44722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 18:42:21.674192   44722 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 18:42:21.674247   44722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 18:42:21.681652   44722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 18:42:21.689753   44722 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 18:42:21.689823   44722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
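Each kubeconfig under /etc/kubernetes is grepped for the expected endpoint https://control-plane.minikube.internal:8441, and any file that does not reference it is deleted so the following kubeadm phases can regenerate it (admin.conf passes the check here, the other three do not). Sketched as an independent program with the same files and endpoint as in the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8441"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero when the endpoint is missing from the file.
		if err := exec.Command("grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%s does not point at %s, removing\n", f, endpoint)
			_ = os.Remove(f) // kubeadm regenerates it in the kubeconfig phase
		}
	}
}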
	I1213 18:42:21.697372   44722 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 18:42:21.705090   44722 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 18:42:21.753330   44722 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 18:42:23.314116   44722 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.560761972s)
	I1213 18:42:23.314176   44722 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 18:42:23.523724   44722 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 18:42:23.594421   44722 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
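The restart then replays the individual kubeadm init phases against the new config: certs, kubeconfig, kubelet-start, control-plane and local etcd. A hedged sketch of that sequence, with the kubeadm binary and config paths taken from the log (running it for real would require root on the node):

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm"
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append([]string{"init", "phase"}, p...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command(kubeadm, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatalf("kubeadm %v failed: %v", args, err)
		}
	}
}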
	I1213 18:42:23.642920   44722 api_server.go:52] waiting for apiserver process to appear ...
	I1213 18:42:23.642986   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:24.143977   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:24.643428   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:25.143550   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:25.643771   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:26.143193   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:26.643175   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:27.143974   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:27.643187   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:28.143912   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:28.643171   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:29.144072   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:29.644225   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:30.144075   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:30.643706   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:31.143172   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:31.643056   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:32.143628   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:32.643125   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:33.143827   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:33.643131   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:34.143247   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:34.643324   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:35.143141   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:35.643248   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:36.143915   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:36.644040   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:37.143715   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:37.643270   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:38.143997   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:38.643143   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:39.144023   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:39.643975   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:40.143050   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:40.643089   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:41.143722   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:41.643477   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:42.143838   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:42.643431   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:43.143175   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:43.643406   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:44.143895   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:44.643143   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:45.144217   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:45.644055   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:46.143137   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:46.644107   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:47.143996   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:47.643160   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:48.143815   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:48.643858   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:49.143166   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:49.644081   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:50.143765   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:50.643065   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:51.143582   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:51.643619   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:52.143220   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:52.643909   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:53.143832   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:53.643709   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:54.143426   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:54.643284   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:55.143992   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:55.643406   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:56.143943   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:56.643844   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:57.143618   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:57.643188   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:58.143857   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:58.643381   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:59.143183   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:59.643139   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:00.143730   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:00.643184   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:01.143789   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:01.643677   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:02.143883   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:02.643235   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:03.143175   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:03.643112   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:04.143893   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:04.643955   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:05.144057   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:05.643239   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:06.143229   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:06.643162   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:07.143132   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:07.643342   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:08.143161   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:08.643365   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:09.144023   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:09.643759   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:10.143925   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:10.644116   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:11.143184   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:11.643163   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:12.144081   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:12.643761   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:13.143171   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:13.643174   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:14.143070   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:14.643090   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:15.143762   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:15.643166   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:16.143069   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:16.644103   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:17.143993   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:17.643934   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:18.143216   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:18.643988   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:19.143982   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:19.643766   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:20.143191   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:20.644118   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:21.143094   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:21.644013   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:22.143973   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:22.643967   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:23.143991   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
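From here the runner polls "pgrep -xnf kube-apiserver.*minikube.*" roughly every 500ms, waiting for an apiserver process to appear; in this run it never does, so after about a minute the loop gives up and falls back to collecting diagnostics below. A simplified, hypothetical version of that wait loop:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls for a kube-apiserver process the same way the log
// above does, returning true as soon as pgrep finds one or false on timeout.
func waitForAPIServer(timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return true
		}
		time.Sleep(500 * time.Millisecond)
	}
	return false
}

func main() {
	if waitForAPIServer(time.Minute) {
		fmt.Println("kube-apiserver process is up")
	} else {
		fmt.Println("timed out waiting for kube-apiserver; collecting logs")
	}
}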
	I1213 18:43:23.643861   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:43:23.643960   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:43:23.674160   44722 cri.go:89] found id: ""
	I1213 18:43:23.674175   44722 logs.go:282] 0 containers: []
	W1213 18:43:23.674182   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:43:23.674187   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:43:23.674245   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:43:23.700540   44722 cri.go:89] found id: ""
	I1213 18:43:23.700554   44722 logs.go:282] 0 containers: []
	W1213 18:43:23.700561   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:43:23.700566   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:43:23.700624   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:43:23.726064   44722 cri.go:89] found id: ""
	I1213 18:43:23.726078   44722 logs.go:282] 0 containers: []
	W1213 18:43:23.726084   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:43:23.726089   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:43:23.726148   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:43:23.752099   44722 cri.go:89] found id: ""
	I1213 18:43:23.752113   44722 logs.go:282] 0 containers: []
	W1213 18:43:23.752120   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:43:23.752125   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:43:23.752190   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:43:23.778105   44722 cri.go:89] found id: ""
	I1213 18:43:23.778120   44722 logs.go:282] 0 containers: []
	W1213 18:43:23.778126   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:43:23.778131   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:43:23.778193   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:43:23.806032   44722 cri.go:89] found id: ""
	I1213 18:43:23.806047   44722 logs.go:282] 0 containers: []
	W1213 18:43:23.806054   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:43:23.806059   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:43:23.806117   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:43:23.832635   44722 cri.go:89] found id: ""
	I1213 18:43:23.832649   44722 logs.go:282] 0 containers: []
	W1213 18:43:23.832658   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:43:23.832667   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:43:23.832679   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:43:23.899244   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:43:23.899262   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:43:23.910777   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:43:23.910793   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:43:23.979546   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:43:23.970843   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:23.971479   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:23.973158   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:23.973794   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:23.975445   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:43:23.970843   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:23.971479   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:23.973158   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:23.973794   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:23.975445   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:43:23.979557   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:43:23.979567   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:43:24.055422   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:43:24.055441   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
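Every diagnostics pass collects the same five sources: the kubelet journal, filtered dmesg, "kubectl describe nodes" via the on-node kubeconfig (failing here because nothing listens on 8441), the CRI-O journal, and a crictl container listing. The commands, copied from the log and replayed locally in a small sketch (the real runner executes them over SSH inside the node):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The same five diagnostics gathered on every retry cycle above.
	cmds := []string{
		"sudo journalctl -u kubelet -n 400",
		"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
		"sudo journalctl -u crio -n 400",
		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for _, c := range cmds {
		out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
		fmt.Printf("== %s (err=%v) ==\n%s\n", c, err, out)
	}
}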
	I1213 18:43:26.587216   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:26.602744   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:43:26.602803   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:43:26.637528   44722 cri.go:89] found id: ""
	I1213 18:43:26.637543   44722 logs.go:282] 0 containers: []
	W1213 18:43:26.637550   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:43:26.637555   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:43:26.637627   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:43:26.668738   44722 cri.go:89] found id: ""
	I1213 18:43:26.668752   44722 logs.go:282] 0 containers: []
	W1213 18:43:26.668759   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:43:26.668764   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:43:26.668820   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:43:26.694813   44722 cri.go:89] found id: ""
	I1213 18:43:26.694827   44722 logs.go:282] 0 containers: []
	W1213 18:43:26.694834   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:43:26.694839   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:43:26.694903   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:43:26.724152   44722 cri.go:89] found id: ""
	I1213 18:43:26.724165   44722 logs.go:282] 0 containers: []
	W1213 18:43:26.724172   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:43:26.724177   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:43:26.724234   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:43:26.753666   44722 cri.go:89] found id: ""
	I1213 18:43:26.753680   44722 logs.go:282] 0 containers: []
	W1213 18:43:26.753687   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:43:26.753692   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:43:26.753751   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:43:26.778797   44722 cri.go:89] found id: ""
	I1213 18:43:26.778810   44722 logs.go:282] 0 containers: []
	W1213 18:43:26.778817   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:43:26.778822   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:43:26.778878   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:43:26.804095   44722 cri.go:89] found id: ""
	I1213 18:43:26.804108   44722 logs.go:282] 0 containers: []
	W1213 18:43:26.804121   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:43:26.804128   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:43:26.804139   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:43:26.872610   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:43:26.863726   11132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:26.864249   11132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:26.865989   11132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:26.866485   11132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:26.868188   11132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:43:26.863726   11132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:26.864249   11132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:26.865989   11132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:26.866485   11132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:26.868188   11132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:43:26.872619   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:43:26.872629   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:43:26.941929   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:43:26.941948   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:43:26.969504   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:43:26.969520   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:43:27.036106   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:43:27.036126   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:43:29.549238   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:29.561563   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:43:29.561629   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:43:29.595212   44722 cri.go:89] found id: ""
	I1213 18:43:29.595227   44722 logs.go:282] 0 containers: []
	W1213 18:43:29.595234   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:43:29.595239   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:43:29.595298   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:43:29.632368   44722 cri.go:89] found id: ""
	I1213 18:43:29.632382   44722 logs.go:282] 0 containers: []
	W1213 18:43:29.632388   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:43:29.632393   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:43:29.632450   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:43:29.661185   44722 cri.go:89] found id: ""
	I1213 18:43:29.661199   44722 logs.go:282] 0 containers: []
	W1213 18:43:29.661206   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:43:29.661211   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:43:29.661271   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:43:29.686961   44722 cri.go:89] found id: ""
	I1213 18:43:29.686974   44722 logs.go:282] 0 containers: []
	W1213 18:43:29.686981   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:43:29.686986   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:43:29.687049   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:43:29.713104   44722 cri.go:89] found id: ""
	I1213 18:43:29.713118   44722 logs.go:282] 0 containers: []
	W1213 18:43:29.713125   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:43:29.713130   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:43:29.713190   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:43:29.738029   44722 cri.go:89] found id: ""
	I1213 18:43:29.738042   44722 logs.go:282] 0 containers: []
	W1213 18:43:29.738049   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:43:29.738054   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:43:29.738116   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:43:29.763765   44722 cri.go:89] found id: ""
	I1213 18:43:29.763779   44722 logs.go:282] 0 containers: []
	W1213 18:43:29.763785   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:43:29.763793   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:43:29.763803   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:43:29.829845   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:43:29.829864   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:43:29.841137   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:43:29.841153   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:43:29.910214   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:43:29.900921   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:29.902099   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:29.903031   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:29.903808   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:29.904683   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:43:29.900921   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:29.902099   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:29.903031   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:29.903808   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:29.904683   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:43:29.910238   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:43:29.910251   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:43:29.979995   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:43:29.980012   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:43:32.559824   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:32.569836   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:43:32.569896   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:43:32.598661   44722 cri.go:89] found id: ""
	I1213 18:43:32.598675   44722 logs.go:282] 0 containers: []
	W1213 18:43:32.598682   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:43:32.598687   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:43:32.598741   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:43:32.629547   44722 cri.go:89] found id: ""
	I1213 18:43:32.629562   44722 logs.go:282] 0 containers: []
	W1213 18:43:32.629568   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:43:32.629573   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:43:32.629650   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:43:32.654825   44722 cri.go:89] found id: ""
	I1213 18:43:32.654839   44722 logs.go:282] 0 containers: []
	W1213 18:43:32.654846   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:43:32.654851   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:43:32.654908   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:43:32.680611   44722 cri.go:89] found id: ""
	I1213 18:43:32.680625   44722 logs.go:282] 0 containers: []
	W1213 18:43:32.680632   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:43:32.680637   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:43:32.680695   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:43:32.706618   44722 cri.go:89] found id: ""
	I1213 18:43:32.706632   44722 logs.go:282] 0 containers: []
	W1213 18:43:32.706639   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:43:32.706643   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:43:32.706702   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:43:32.730958   44722 cri.go:89] found id: ""
	I1213 18:43:32.730971   44722 logs.go:282] 0 containers: []
	W1213 18:43:32.730978   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:43:32.730983   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:43:32.731052   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:43:32.759159   44722 cri.go:89] found id: ""
	I1213 18:43:32.759172   44722 logs.go:282] 0 containers: []
	W1213 18:43:32.759179   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:43:32.759186   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:43:32.759196   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:43:32.824778   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:43:32.824797   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:43:32.835474   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:43:32.835491   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:43:32.898129   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:43:32.889603   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:32.890366   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:32.891862   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:32.892440   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:32.893974   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:43:32.889603   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:32.890366   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:32.891862   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:32.892440   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:32.893974   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:43:32.898149   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:43:32.898160   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:43:32.970010   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:43:32.970027   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:43:35.499162   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:35.510104   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:43:35.510168   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:43:35.536034   44722 cri.go:89] found id: ""
	I1213 18:43:35.536054   44722 logs.go:282] 0 containers: []
	W1213 18:43:35.536061   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:43:35.536066   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:43:35.536125   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:43:35.560363   44722 cri.go:89] found id: ""
	I1213 18:43:35.560377   44722 logs.go:282] 0 containers: []
	W1213 18:43:35.560384   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:43:35.560389   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:43:35.560447   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:43:35.595466   44722 cri.go:89] found id: ""
	I1213 18:43:35.595480   44722 logs.go:282] 0 containers: []
	W1213 18:43:35.595486   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:43:35.595491   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:43:35.595546   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:43:35.626296   44722 cri.go:89] found id: ""
	I1213 18:43:35.626310   44722 logs.go:282] 0 containers: []
	W1213 18:43:35.626316   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:43:35.626321   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:43:35.626376   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:43:35.653200   44722 cri.go:89] found id: ""
	I1213 18:43:35.653214   44722 logs.go:282] 0 containers: []
	W1213 18:43:35.653221   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:43:35.653225   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:43:35.653322   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:43:35.678439   44722 cri.go:89] found id: ""
	I1213 18:43:35.678453   44722 logs.go:282] 0 containers: []
	W1213 18:43:35.678459   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:43:35.678464   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:43:35.678525   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:43:35.703934   44722 cri.go:89] found id: ""
	I1213 18:43:35.703948   44722 logs.go:282] 0 containers: []
	W1213 18:43:35.703954   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:43:35.703962   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:43:35.703972   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:43:35.769879   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:43:35.769897   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:43:35.781228   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:43:35.781245   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:43:35.848304   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:43:35.840026   11450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:35.840682   11450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:35.842398   11450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:35.842978   11450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:35.844548   11450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:43:35.840026   11450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:35.840682   11450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:35.842398   11450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:35.842978   11450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:35.844548   11450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:43:35.848316   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:43:35.848327   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:43:35.917611   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:43:35.917630   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:43:38.449407   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:38.459447   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:43:38.459504   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:43:38.485144   44722 cri.go:89] found id: ""
	I1213 18:43:38.485156   44722 logs.go:282] 0 containers: []
	W1213 18:43:38.485163   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:43:38.485179   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:43:38.485241   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:43:38.513966   44722 cri.go:89] found id: ""
	I1213 18:43:38.513980   44722 logs.go:282] 0 containers: []
	W1213 18:43:38.513987   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:43:38.513992   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:43:38.514050   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:43:38.540044   44722 cri.go:89] found id: ""
	I1213 18:43:38.540058   44722 logs.go:282] 0 containers: []
	W1213 18:43:38.540065   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:43:38.540070   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:43:38.540128   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:43:38.570046   44722 cri.go:89] found id: ""
	I1213 18:43:38.570060   44722 logs.go:282] 0 containers: []
	W1213 18:43:38.570067   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:43:38.570072   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:43:38.570131   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:43:38.602431   44722 cri.go:89] found id: ""
	I1213 18:43:38.602444   44722 logs.go:282] 0 containers: []
	W1213 18:43:38.602451   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:43:38.602456   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:43:38.602513   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:43:38.631212   44722 cri.go:89] found id: ""
	I1213 18:43:38.631226   44722 logs.go:282] 0 containers: []
	W1213 18:43:38.631233   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:43:38.631238   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:43:38.631295   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:43:38.658361   44722 cri.go:89] found id: ""
	I1213 18:43:38.658375   44722 logs.go:282] 0 containers: []
	W1213 18:43:38.658383   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:43:38.658391   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:43:38.658401   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:43:38.728418   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:43:38.728436   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:43:38.739710   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:43:38.739726   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:43:38.807705   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:43:38.799135   11557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:38.799833   11557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:38.801634   11557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:38.802286   11557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:38.803965   11557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:43:38.799135   11557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:38.799833   11557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:38.801634   11557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:38.802286   11557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:38.803965   11557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:43:38.807715   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:43:38.807726   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:43:38.876773   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:43:38.876792   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:43:41.406031   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:41.416061   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:43:41.416122   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:43:41.441164   44722 cri.go:89] found id: ""
	I1213 18:43:41.441178   44722 logs.go:282] 0 containers: []
	W1213 18:43:41.441184   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:43:41.441189   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:43:41.441246   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:43:41.468283   44722 cri.go:89] found id: ""
	I1213 18:43:41.468296   44722 logs.go:282] 0 containers: []
	W1213 18:43:41.468303   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:43:41.468313   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:43:41.468369   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:43:41.492435   44722 cri.go:89] found id: ""
	I1213 18:43:41.492449   44722 logs.go:282] 0 containers: []
	W1213 18:43:41.492456   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:43:41.492461   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:43:41.492525   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:43:41.517861   44722 cri.go:89] found id: ""
	I1213 18:43:41.517874   44722 logs.go:282] 0 containers: []
	W1213 18:43:41.517881   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:43:41.517886   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:43:41.517946   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:43:41.542334   44722 cri.go:89] found id: ""
	I1213 18:43:41.542348   44722 logs.go:282] 0 containers: []
	W1213 18:43:41.542354   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:43:41.542359   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:43:41.542420   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:43:41.566791   44722 cri.go:89] found id: ""
	I1213 18:43:41.566805   44722 logs.go:282] 0 containers: []
	W1213 18:43:41.566812   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:43:41.566817   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:43:41.566873   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:43:41.605333   44722 cri.go:89] found id: ""
	I1213 18:43:41.605347   44722 logs.go:282] 0 containers: []
	W1213 18:43:41.605353   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:43:41.605361   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:43:41.605372   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:43:41.685285   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:43:41.685307   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:43:41.719016   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:43:41.719031   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:43:41.784620   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:43:41.784638   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:43:41.797084   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:43:41.797099   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:43:41.863425   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:43:41.855920   11672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:41.856329   11672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:41.857901   11672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:41.858215   11672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:41.859646   11672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:43:41.855920   11672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:41.856329   11672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:41.857901   11672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:41.858215   11672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:41.859646   11672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:43:44.365147   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:44.375234   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:43:44.375292   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:43:44.404071   44722 cri.go:89] found id: ""
	I1213 18:43:44.404084   44722 logs.go:282] 0 containers: []
	W1213 18:43:44.404091   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:43:44.404100   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:43:44.404159   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:43:44.429141   44722 cri.go:89] found id: ""
	I1213 18:43:44.429154   44722 logs.go:282] 0 containers: []
	W1213 18:43:44.429161   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:43:44.429166   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:43:44.429235   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:43:44.453307   44722 cri.go:89] found id: ""
	I1213 18:43:44.453321   44722 logs.go:282] 0 containers: []
	W1213 18:43:44.453328   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:43:44.453332   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:43:44.453409   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:43:44.478549   44722 cri.go:89] found id: ""
	I1213 18:43:44.478563   44722 logs.go:282] 0 containers: []
	W1213 18:43:44.478570   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:43:44.478576   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:43:44.478636   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:43:44.504258   44722 cri.go:89] found id: ""
	I1213 18:43:44.504272   44722 logs.go:282] 0 containers: []
	W1213 18:43:44.504278   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:43:44.504283   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:43:44.504340   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:43:44.528573   44722 cri.go:89] found id: ""
	I1213 18:43:44.528587   44722 logs.go:282] 0 containers: []
	W1213 18:43:44.528594   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:43:44.528599   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:43:44.528655   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:43:44.553529   44722 cri.go:89] found id: ""
	I1213 18:43:44.553555   44722 logs.go:282] 0 containers: []
	W1213 18:43:44.553562   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:43:44.553570   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:43:44.553581   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:43:44.591322   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:43:44.591339   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:43:44.676235   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:43:44.676264   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:43:44.687308   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:43:44.687333   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:43:44.749534   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:43:44.740808   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:44.741545   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:44.743186   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:44.743511   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:44.745093   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:43:44.740808   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:44.741545   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:44.743186   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:44.743511   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:44.745093   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:43:44.749567   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:43:44.749577   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:43:47.317951   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:47.328222   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:43:47.328296   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:43:47.357484   44722 cri.go:89] found id: ""
	I1213 18:43:47.357498   44722 logs.go:282] 0 containers: []
	W1213 18:43:47.357515   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:43:47.357521   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:43:47.357593   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:43:47.388340   44722 cri.go:89] found id: ""
	I1213 18:43:47.388354   44722 logs.go:282] 0 containers: []
	W1213 18:43:47.388362   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:43:47.388367   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:43:47.388431   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:43:47.412714   44722 cri.go:89] found id: ""
	I1213 18:43:47.412726   44722 logs.go:282] 0 containers: []
	W1213 18:43:47.412733   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:43:47.412738   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:43:47.412794   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:43:47.437349   44722 cri.go:89] found id: ""
	I1213 18:43:47.437363   44722 logs.go:282] 0 containers: []
	W1213 18:43:47.437369   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:43:47.437374   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:43:47.437432   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:43:47.461369   44722 cri.go:89] found id: ""
	I1213 18:43:47.461383   44722 logs.go:282] 0 containers: []
	W1213 18:43:47.461390   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:43:47.461395   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:43:47.461454   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:43:47.494140   44722 cri.go:89] found id: ""
	I1213 18:43:47.494154   44722 logs.go:282] 0 containers: []
	W1213 18:43:47.494161   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:43:47.494166   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:43:47.494223   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:43:47.519020   44722 cri.go:89] found id: ""
	I1213 18:43:47.519033   44722 logs.go:282] 0 containers: []
	W1213 18:43:47.519040   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:43:47.519047   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:43:47.519060   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:43:47.587741   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:43:47.587760   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:43:47.623942   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:43:47.623957   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:43:47.696440   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:43:47.696459   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:43:47.707187   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:43:47.707203   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:43:47.769911   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:43:47.762074   11880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:47.762544   11880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:47.764216   11880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:47.764680   11880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:47.766131   11880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:43:47.762074   11880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:47.762544   11880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:47.764216   11880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:47.764680   11880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:47.766131   11880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:43:50.270188   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:50.280132   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:43:50.280190   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:43:50.308672   44722 cri.go:89] found id: ""
	I1213 18:43:50.308686   44722 logs.go:282] 0 containers: []
	W1213 18:43:50.308693   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:43:50.308699   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:43:50.308758   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:43:50.335996   44722 cri.go:89] found id: ""
	I1213 18:43:50.336010   44722 logs.go:282] 0 containers: []
	W1213 18:43:50.336016   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:43:50.336021   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:43:50.336080   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:43:50.361733   44722 cri.go:89] found id: ""
	I1213 18:43:50.361746   44722 logs.go:282] 0 containers: []
	W1213 18:43:50.361753   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:43:50.361758   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:43:50.361816   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:43:50.387122   44722 cri.go:89] found id: ""
	I1213 18:43:50.387137   44722 logs.go:282] 0 containers: []
	W1213 18:43:50.387143   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:43:50.387148   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:43:50.387204   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:43:50.411746   44722 cri.go:89] found id: ""
	I1213 18:43:50.411760   44722 logs.go:282] 0 containers: []
	W1213 18:43:50.411766   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:43:50.411771   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:43:50.411828   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:43:50.439079   44722 cri.go:89] found id: ""
	I1213 18:43:50.439093   44722 logs.go:282] 0 containers: []
	W1213 18:43:50.439100   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:43:50.439104   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:43:50.439158   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:43:50.464264   44722 cri.go:89] found id: ""
	I1213 18:43:50.464278   44722 logs.go:282] 0 containers: []
	W1213 18:43:50.464285   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:43:50.464293   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:43:50.464303   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:43:50.530938   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:43:50.530956   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:43:50.541880   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:43:50.541897   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:43:50.622277   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:43:50.613287   11970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:50.613702   11970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:50.615208   11970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:50.615836   11970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:50.616931   11970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:43:50.613287   11970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:50.613702   11970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:50.615208   11970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:50.615836   11970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:50.616931   11970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:43:50.622299   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:43:50.622311   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:43:50.693744   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:43:50.693765   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:43:53.224830   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:53.235168   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:43:53.235224   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:43:53.261284   44722 cri.go:89] found id: ""
	I1213 18:43:53.261297   44722 logs.go:282] 0 containers: []
	W1213 18:43:53.261304   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:43:53.261309   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:43:53.261369   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:43:53.287104   44722 cri.go:89] found id: ""
	I1213 18:43:53.287118   44722 logs.go:282] 0 containers: []
	W1213 18:43:53.287125   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:43:53.287136   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:43:53.287197   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:43:53.312612   44722 cri.go:89] found id: ""
	I1213 18:43:53.312626   44722 logs.go:282] 0 containers: []
	W1213 18:43:53.312636   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:43:53.312641   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:43:53.312700   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:43:53.338548   44722 cri.go:89] found id: ""
	I1213 18:43:53.338562   44722 logs.go:282] 0 containers: []
	W1213 18:43:53.338570   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:43:53.338575   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:43:53.338634   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:43:53.363849   44722 cri.go:89] found id: ""
	I1213 18:43:53.363862   44722 logs.go:282] 0 containers: []
	W1213 18:43:53.363869   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:43:53.363874   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:43:53.363933   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:43:53.388677   44722 cri.go:89] found id: ""
	I1213 18:43:53.388693   44722 logs.go:282] 0 containers: []
	W1213 18:43:53.388700   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:43:53.388707   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:43:53.388764   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:43:53.413384   44722 cri.go:89] found id: ""
	I1213 18:43:53.413398   44722 logs.go:282] 0 containers: []
	W1213 18:43:53.413405   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:43:53.413412   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:43:53.413426   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:43:53.480895   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:43:53.480915   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:43:53.510174   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:43:53.510191   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:43:53.579252   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:43:53.579272   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:43:53.594356   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:43:53.594373   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:43:53.674807   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:43:53.667137   12096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:53.667568   12096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:53.669097   12096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:53.669497   12096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:53.670996   12096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:43:53.667137   12096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:53.667568   12096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:53.669097   12096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:53.669497   12096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:53.670996   12096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:43:56.175034   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:56.185031   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:43:56.185091   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:43:56.210252   44722 cri.go:89] found id: ""
	I1213 18:43:56.210266   44722 logs.go:282] 0 containers: []
	W1213 18:43:56.210273   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:43:56.210289   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:43:56.210345   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:43:56.238190   44722 cri.go:89] found id: ""
	I1213 18:43:56.238204   44722 logs.go:282] 0 containers: []
	W1213 18:43:56.238211   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:43:56.238216   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:43:56.238280   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:43:56.262334   44722 cri.go:89] found id: ""
	I1213 18:43:56.262361   44722 logs.go:282] 0 containers: []
	W1213 18:43:56.262368   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:43:56.262374   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:43:56.262439   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:43:56.286668   44722 cri.go:89] found id: ""
	I1213 18:43:56.286681   44722 logs.go:282] 0 containers: []
	W1213 18:43:56.286688   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:43:56.286693   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:43:56.286753   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:43:56.312401   44722 cri.go:89] found id: ""
	I1213 18:43:56.312426   44722 logs.go:282] 0 containers: []
	W1213 18:43:56.312434   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:43:56.312439   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:43:56.312514   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:43:56.337419   44722 cri.go:89] found id: ""
	I1213 18:43:56.337433   44722 logs.go:282] 0 containers: []
	W1213 18:43:56.337440   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:43:56.337446   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:43:56.337512   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:43:56.363240   44722 cri.go:89] found id: ""
	I1213 18:43:56.363252   44722 logs.go:282] 0 containers: []
	W1213 18:43:56.363259   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:43:56.363274   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:43:56.363285   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:43:56.427558   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:43:56.427576   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:43:56.438948   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:43:56.438963   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:43:56.504100   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:43:56.496063   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:56.496558   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:56.498109   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:56.498537   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:56.500111   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:43:56.496063   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:56.496558   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:56.498109   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:56.498537   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:56.500111   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:43:56.504110   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:43:56.504121   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:43:56.576300   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:43:56.576319   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:43:59.120724   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:59.131483   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:43:59.131541   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:43:59.161664   44722 cri.go:89] found id: ""
	I1213 18:43:59.161677   44722 logs.go:282] 0 containers: []
	W1213 18:43:59.161684   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:43:59.161689   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:43:59.161747   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:43:59.186541   44722 cri.go:89] found id: ""
	I1213 18:43:59.186554   44722 logs.go:282] 0 containers: []
	W1213 18:43:59.186561   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:43:59.186566   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:43:59.186631   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:43:59.214613   44722 cri.go:89] found id: ""
	I1213 18:43:59.214627   44722 logs.go:282] 0 containers: []
	W1213 18:43:59.214634   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:43:59.214639   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:43:59.214696   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:43:59.239790   44722 cri.go:89] found id: ""
	I1213 18:43:59.239803   44722 logs.go:282] 0 containers: []
	W1213 18:43:59.239810   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:43:59.239815   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:43:59.239881   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:43:59.268177   44722 cri.go:89] found id: ""
	I1213 18:43:59.268191   44722 logs.go:282] 0 containers: []
	W1213 18:43:59.268198   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:43:59.268203   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:43:59.268267   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:43:59.292660   44722 cri.go:89] found id: ""
	I1213 18:43:59.292674   44722 logs.go:282] 0 containers: []
	W1213 18:43:59.292680   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:43:59.292687   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:43:59.292746   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:43:59.318413   44722 cri.go:89] found id: ""
	I1213 18:43:59.318428   44722 logs.go:282] 0 containers: []
	W1213 18:43:59.318434   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:43:59.318442   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:43:59.318453   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:43:59.383565   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:43:59.383584   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:43:59.394753   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:43:59.394770   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:43:59.455757   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:43:59.448022   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:59.448571   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:59.450046   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:59.450376   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:59.451813   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:43:59.448022   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:59.448571   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:59.450046   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:59.450376   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:59.451813   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:43:59.455767   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:43:59.455777   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:43:59.527189   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:43:59.527209   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:02.063131   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:02.073460   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:02.073527   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:02.100600   44722 cri.go:89] found id: ""
	I1213 18:44:02.100614   44722 logs.go:282] 0 containers: []
	W1213 18:44:02.100621   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:02.100626   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:02.100683   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:02.128484   44722 cri.go:89] found id: ""
	I1213 18:44:02.128498   44722 logs.go:282] 0 containers: []
	W1213 18:44:02.128505   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:02.128510   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:02.128569   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:02.153979   44722 cri.go:89] found id: ""
	I1213 18:44:02.153994   44722 logs.go:282] 0 containers: []
	W1213 18:44:02.154000   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:02.154005   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:02.154063   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:02.178950   44722 cri.go:89] found id: ""
	I1213 18:44:02.178964   44722 logs.go:282] 0 containers: []
	W1213 18:44:02.178971   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:02.178975   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:02.179034   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:02.203560   44722 cri.go:89] found id: ""
	I1213 18:44:02.203573   44722 logs.go:282] 0 containers: []
	W1213 18:44:02.203599   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:02.203604   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:02.203668   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:02.235040   44722 cri.go:89] found id: ""
	I1213 18:44:02.235054   44722 logs.go:282] 0 containers: []
	W1213 18:44:02.235061   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:02.235066   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:02.235125   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:02.262563   44722 cri.go:89] found id: ""
	I1213 18:44:02.262578   44722 logs.go:282] 0 containers: []
	W1213 18:44:02.262591   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:02.262598   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:02.262610   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:02.330429   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:02.330448   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:02.358932   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:02.358953   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:02.430089   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:02.430108   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:02.441162   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:02.441179   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:02.505804   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:02.496664   12407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:02.498082   12407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:02.499014   12407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:02.500016   12407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:02.500340   12407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:02.496664   12407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:02.498082   12407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:02.499014   12407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:02.500016   12407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:02.500340   12407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:05.006147   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:05.021965   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:05.022041   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:05.052122   44722 cri.go:89] found id: ""
	I1213 18:44:05.052138   44722 logs.go:282] 0 containers: []
	W1213 18:44:05.052145   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:05.052152   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:05.052213   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:05.079304   44722 cri.go:89] found id: ""
	I1213 18:44:05.079318   44722 logs.go:282] 0 containers: []
	W1213 18:44:05.079325   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:05.079330   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:05.079387   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:05.106489   44722 cri.go:89] found id: ""
	I1213 18:44:05.106502   44722 logs.go:282] 0 containers: []
	W1213 18:44:05.106510   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:05.106515   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:05.106573   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:05.132104   44722 cri.go:89] found id: ""
	I1213 18:44:05.132118   44722 logs.go:282] 0 containers: []
	W1213 18:44:05.132125   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:05.132130   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:05.132186   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:05.157774   44722 cri.go:89] found id: ""
	I1213 18:44:05.157789   44722 logs.go:282] 0 containers: []
	W1213 18:44:05.157795   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:05.157800   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:05.157860   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:05.185228   44722 cri.go:89] found id: ""
	I1213 18:44:05.185241   44722 logs.go:282] 0 containers: []
	W1213 18:44:05.185248   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:05.185254   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:05.185313   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:05.211945   44722 cri.go:89] found id: ""
	I1213 18:44:05.211959   44722 logs.go:282] 0 containers: []
	W1213 18:44:05.211965   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:05.211973   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:05.211982   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:05.240000   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:05.240016   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:05.305313   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:05.305331   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:05.316614   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:05.316628   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:05.380462   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:05.372183   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:05.373062   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:05.374815   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:05.375112   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:05.376609   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:05.372183   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:05.373062   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:05.374815   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:05.375112   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:05.376609   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:05.380472   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:05.380482   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:07.948856   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:07.959788   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:07.959853   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:07.985640   44722 cri.go:89] found id: ""
	I1213 18:44:07.985655   44722 logs.go:282] 0 containers: []
	W1213 18:44:07.985662   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:07.985667   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:07.985735   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:08.017082   44722 cri.go:89] found id: ""
	I1213 18:44:08.017096   44722 logs.go:282] 0 containers: []
	W1213 18:44:08.017105   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:08.017111   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:08.017176   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:08.046580   44722 cri.go:89] found id: ""
	I1213 18:44:08.046595   44722 logs.go:282] 0 containers: []
	W1213 18:44:08.046603   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:08.046609   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:08.046678   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:08.073255   44722 cri.go:89] found id: ""
	I1213 18:44:08.073269   44722 logs.go:282] 0 containers: []
	W1213 18:44:08.073275   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:08.073281   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:08.073342   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:08.101465   44722 cri.go:89] found id: ""
	I1213 18:44:08.101479   44722 logs.go:282] 0 containers: []
	W1213 18:44:08.101486   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:08.101491   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:08.101560   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:08.126539   44722 cri.go:89] found id: ""
	I1213 18:44:08.126553   44722 logs.go:282] 0 containers: []
	W1213 18:44:08.126559   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:08.126564   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:08.126624   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:08.151274   44722 cri.go:89] found id: ""
	I1213 18:44:08.151287   44722 logs.go:282] 0 containers: []
	W1213 18:44:08.151294   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:08.151301   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:08.151311   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:08.221734   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:08.221760   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:08.234257   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:08.234274   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:08.303822   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:08.293709   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:08.294557   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:08.296695   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:08.297712   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:08.298655   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:08.293709   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:08.294557   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:08.296695   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:08.297712   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:08.298655   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:08.303834   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:08.303846   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:08.373320   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:08.373340   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:10.905140   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:10.916748   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:10.916820   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:10.944090   44722 cri.go:89] found id: ""
	I1213 18:44:10.944103   44722 logs.go:282] 0 containers: []
	W1213 18:44:10.944111   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:10.944115   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:10.944176   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:10.969154   44722 cri.go:89] found id: ""
	I1213 18:44:10.969168   44722 logs.go:282] 0 containers: []
	W1213 18:44:10.969174   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:10.969179   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:10.969237   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:10.994056   44722 cri.go:89] found id: ""
	I1213 18:44:10.994070   44722 logs.go:282] 0 containers: []
	W1213 18:44:10.994078   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:10.994082   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:10.994195   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:11.026335   44722 cri.go:89] found id: ""
	I1213 18:44:11.026349   44722 logs.go:282] 0 containers: []
	W1213 18:44:11.026356   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:11.026362   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:11.026420   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:11.051618   44722 cri.go:89] found id: ""
	I1213 18:44:11.051632   44722 logs.go:282] 0 containers: []
	W1213 18:44:11.051639   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:11.051644   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:11.051702   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:11.077796   44722 cri.go:89] found id: ""
	I1213 18:44:11.077811   44722 logs.go:282] 0 containers: []
	W1213 18:44:11.077818   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:11.077824   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:11.077885   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:11.106061   44722 cri.go:89] found id: ""
	I1213 18:44:11.106082   44722 logs.go:282] 0 containers: []
	W1213 18:44:11.106089   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:11.106096   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:11.106107   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:11.172632   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:11.164014   12707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:11.164956   12707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:11.166552   12707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:11.167108   12707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:11.168668   12707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:11.164014   12707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:11.164956   12707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:11.166552   12707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:11.167108   12707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:11.168668   12707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:11.172644   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:11.172654   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:11.241474   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:11.241492   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:11.270376   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:11.270394   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:11.335341   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:11.335360   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:13.846544   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:13.858154   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:13.858216   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:13.891714   44722 cri.go:89] found id: ""
	I1213 18:44:13.891728   44722 logs.go:282] 0 containers: []
	W1213 18:44:13.891735   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:13.891740   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:13.891796   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:13.917089   44722 cri.go:89] found id: ""
	I1213 18:44:13.917103   44722 logs.go:282] 0 containers: []
	W1213 18:44:13.917110   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:13.917115   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:13.917175   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:13.942618   44722 cri.go:89] found id: ""
	I1213 18:44:13.942637   44722 logs.go:282] 0 containers: []
	W1213 18:44:13.942644   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:13.942654   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:13.942717   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:13.972824   44722 cri.go:89] found id: ""
	I1213 18:44:13.972837   44722 logs.go:282] 0 containers: []
	W1213 18:44:13.972844   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:13.972850   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:13.972911   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:14.002454   44722 cri.go:89] found id: ""
	I1213 18:44:14.002478   44722 logs.go:282] 0 containers: []
	W1213 18:44:14.002507   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:14.002515   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:14.002584   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:14.029621   44722 cri.go:89] found id: ""
	I1213 18:44:14.029635   44722 logs.go:282] 0 containers: []
	W1213 18:44:14.029642   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:14.029647   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:14.029705   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:14.059348   44722 cri.go:89] found id: ""
	I1213 18:44:14.059361   44722 logs.go:282] 0 containers: []
	W1213 18:44:14.059368   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:14.059376   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:14.059386   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:14.089028   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:14.089044   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:14.154770   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:14.154787   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:14.165718   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:14.165733   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:14.229870   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:14.221572   12825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:14.222738   12825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:14.223785   12825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:14.224389   12825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:14.225986   12825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:14.221572   12825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:14.222738   12825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:14.223785   12825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:14.224389   12825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:14.225986   12825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:14.229881   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:14.229893   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:16.799799   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:16.810049   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:16.810109   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:16.841177   44722 cri.go:89] found id: ""
	I1213 18:44:16.841190   44722 logs.go:282] 0 containers: []
	W1213 18:44:16.841197   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:16.841202   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:16.841258   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:16.867562   44722 cri.go:89] found id: ""
	I1213 18:44:16.867576   44722 logs.go:282] 0 containers: []
	W1213 18:44:16.867583   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:16.867588   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:16.867647   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:16.894362   44722 cri.go:89] found id: ""
	I1213 18:44:16.894376   44722 logs.go:282] 0 containers: []
	W1213 18:44:16.894383   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:16.894388   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:16.894449   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:16.922192   44722 cri.go:89] found id: ""
	I1213 18:44:16.922205   44722 logs.go:282] 0 containers: []
	W1213 18:44:16.922212   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:16.922217   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:16.922274   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:16.947061   44722 cri.go:89] found id: ""
	I1213 18:44:16.947081   44722 logs.go:282] 0 containers: []
	W1213 18:44:16.947088   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:16.947093   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:16.947151   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:16.973311   44722 cri.go:89] found id: ""
	I1213 18:44:16.973337   44722 logs.go:282] 0 containers: []
	W1213 18:44:16.973345   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:16.973349   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:16.973409   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:17.002040   44722 cri.go:89] found id: ""
	I1213 18:44:17.002056   44722 logs.go:282] 0 containers: []
	W1213 18:44:17.002077   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:17.002086   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:17.002097   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:17.070995   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:17.062754   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:17.063352   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:17.064945   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:17.065473   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:17.066944   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:17.062754   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:17.063352   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:17.064945   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:17.065473   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:17.066944   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:17.071005   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:17.071015   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:17.142450   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:17.142467   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:17.174618   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:17.174636   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:17.245843   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:17.245861   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:19.758316   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:19.768061   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:19.768139   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:19.793023   44722 cri.go:89] found id: ""
	I1213 18:44:19.793037   44722 logs.go:282] 0 containers: []
	W1213 18:44:19.793044   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:19.793049   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:19.793113   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:19.817629   44722 cri.go:89] found id: ""
	I1213 18:44:19.817643   44722 logs.go:282] 0 containers: []
	W1213 18:44:19.817649   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:19.817654   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:19.817710   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:19.851145   44722 cri.go:89] found id: ""
	I1213 18:44:19.851159   44722 logs.go:282] 0 containers: []
	W1213 18:44:19.851166   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:19.851170   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:19.851234   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:19.881252   44722 cri.go:89] found id: ""
	I1213 18:44:19.881265   44722 logs.go:282] 0 containers: []
	W1213 18:44:19.881272   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:19.881277   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:19.881339   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:19.912741   44722 cri.go:89] found id: ""
	I1213 18:44:19.912754   44722 logs.go:282] 0 containers: []
	W1213 18:44:19.912761   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:19.912766   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:19.912823   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:19.940085   44722 cri.go:89] found id: ""
	I1213 18:44:19.940098   44722 logs.go:282] 0 containers: []
	W1213 18:44:19.940105   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:19.940110   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:19.940168   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:19.967047   44722 cri.go:89] found id: ""
	I1213 18:44:19.967061   44722 logs.go:282] 0 containers: []
	W1213 18:44:19.967067   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:19.967081   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:19.967092   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:20.039016   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:20.039038   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:20.052809   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:20.052826   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:20.124568   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:20.115906   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:20.116315   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:20.118019   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:20.118655   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:20.120394   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:20.115906   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:20.116315   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:20.118019   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:20.118655   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:20.120394   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:20.124579   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:20.124595   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:20.192989   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:20.193017   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:22.722315   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:22.732622   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:22.732684   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:22.757530   44722 cri.go:89] found id: ""
	I1213 18:44:22.757544   44722 logs.go:282] 0 containers: []
	W1213 18:44:22.757551   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:22.757556   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:22.757614   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:22.783868   44722 cri.go:89] found id: ""
	I1213 18:44:22.783891   44722 logs.go:282] 0 containers: []
	W1213 18:44:22.783899   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:22.783906   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:22.783973   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:22.809581   44722 cri.go:89] found id: ""
	I1213 18:44:22.809602   44722 logs.go:282] 0 containers: []
	W1213 18:44:22.809610   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:22.809615   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:22.809676   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:22.844651   44722 cri.go:89] found id: ""
	I1213 18:44:22.844665   44722 logs.go:282] 0 containers: []
	W1213 18:44:22.844672   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:22.844677   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:22.844734   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:22.878207   44722 cri.go:89] found id: ""
	I1213 18:44:22.878221   44722 logs.go:282] 0 containers: []
	W1213 18:44:22.878228   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:22.878233   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:22.878291   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:22.909295   44722 cri.go:89] found id: ""
	I1213 18:44:22.909309   44722 logs.go:282] 0 containers: []
	W1213 18:44:22.909316   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:22.909322   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:22.909382   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:22.936178   44722 cri.go:89] found id: ""
	I1213 18:44:22.936191   44722 logs.go:282] 0 containers: []
	W1213 18:44:22.936207   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:22.936215   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:22.936225   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:23.005296   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:22.992378   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:22.993185   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:22.994804   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:22.995396   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:22.997070   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:22.992378   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:22.993185   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:22.994804   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:22.995396   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:22.997070   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:23.005308   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:23.005319   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:23.079778   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:23.079797   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:23.109955   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:23.109982   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:23.176235   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:23.176252   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:25.689578   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:25.699921   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:25.699979   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:25.723877   44722 cri.go:89] found id: ""
	I1213 18:44:25.723891   44722 logs.go:282] 0 containers: []
	W1213 18:44:25.723898   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:25.723902   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:25.723959   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:25.749128   44722 cri.go:89] found id: ""
	I1213 18:44:25.749142   44722 logs.go:282] 0 containers: []
	W1213 18:44:25.749148   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:25.749153   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:25.749209   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:25.773791   44722 cri.go:89] found id: ""
	I1213 18:44:25.773811   44722 logs.go:282] 0 containers: []
	W1213 18:44:25.773818   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:25.773823   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:25.773881   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:25.799904   44722 cri.go:89] found id: ""
	I1213 18:44:25.799917   44722 logs.go:282] 0 containers: []
	W1213 18:44:25.799924   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:25.799929   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:25.799988   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:25.825978   44722 cri.go:89] found id: ""
	I1213 18:44:25.825992   44722 logs.go:282] 0 containers: []
	W1213 18:44:25.825999   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:25.826004   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:25.826061   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:25.861824   44722 cri.go:89] found id: ""
	I1213 18:44:25.861838   44722 logs.go:282] 0 containers: []
	W1213 18:44:25.861854   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:25.861860   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:25.861917   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:25.899196   44722 cri.go:89] found id: ""
	I1213 18:44:25.899209   44722 logs.go:282] 0 containers: []
	W1213 18:44:25.899227   44722 logs.go:284] No container was found matching "kindnet"
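The sweep above is how minikube concludes the control plane is still down: it asks the CRI runtime for any container, running or exited, named after each control-plane component, and every query returns an empty ID list. The same check can be repeated by hand on the node with the crictl invocation that appears verbatim in the log; this is only a convenience loop around it:

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      echo "== $c =="
      sudo crictl ps -a --quiet --name=$c    # empty output means no such container, matching the log
    done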
	I1213 18:44:25.899235   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:25.899245   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:25.962230   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:25.953208   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:25.953997   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:25.955726   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:25.956332   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:25.957845   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:25.953208   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:25.953997   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:25.955726   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:25.956332   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:25.957845   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:25.962249   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:25.962260   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:26.029250   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:26.029269   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:26.058026   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:26.058045   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:26.126957   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:26.126975   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
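Each failed pass ends by collecting the same diagnostic bundle before the next retry: the kubelet and CRI-O journals, kernel warnings, and a container listing. The commands are the ones shown verbatim in the ssh_runner lines and can be rerun on the node to see why no control-plane pods are being created (a sketch; the crictl fallback is simplified from the logged form):

    sudo journalctl -u kubelet -n 400        # kubelet errors: why static pods are not starting
    sudo journalctl -u crio -n 400           # CRI-O runtime errors
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo crictl ps -a || sudo docker ps -a   # container status, with docker as fallback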
	I1213 18:44:28.638630   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:28.649197   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:28.649261   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:28.678140   44722 cri.go:89] found id: ""
	I1213 18:44:28.678155   44722 logs.go:282] 0 containers: []
	W1213 18:44:28.678162   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:28.678166   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:28.678225   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:28.704240   44722 cri.go:89] found id: ""
	I1213 18:44:28.704253   44722 logs.go:282] 0 containers: []
	W1213 18:44:28.704266   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:28.704271   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:28.704332   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:28.729471   44722 cri.go:89] found id: ""
	I1213 18:44:28.729484   44722 logs.go:282] 0 containers: []
	W1213 18:44:28.729492   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:28.729499   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:28.729560   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:28.755384   44722 cri.go:89] found id: ""
	I1213 18:44:28.755397   44722 logs.go:282] 0 containers: []
	W1213 18:44:28.755404   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:28.755419   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:28.755527   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:28.780729   44722 cri.go:89] found id: ""
	I1213 18:44:28.780742   44722 logs.go:282] 0 containers: []
	W1213 18:44:28.780749   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:28.780754   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:28.780819   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:28.807414   44722 cri.go:89] found id: ""
	I1213 18:44:28.807428   44722 logs.go:282] 0 containers: []
	W1213 18:44:28.807434   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:28.807439   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:28.807495   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:28.834478   44722 cri.go:89] found id: ""
	I1213 18:44:28.834492   44722 logs.go:282] 0 containers: []
	W1213 18:44:28.834501   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:28.834509   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:28.834519   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:28.928552   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:28.919277   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:28.920155   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:28.921759   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:28.922310   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:28.923982   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:28.919277   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:28.920155   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:28.921759   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:28.922310   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:28.923982   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:28.928563   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:28.928572   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:28.998427   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:28.998448   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:29.028696   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:29.028713   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:29.094175   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:29.094194   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:31.605517   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:31.616232   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:31.616297   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:31.642711   44722 cri.go:89] found id: ""
	I1213 18:44:31.642725   44722 logs.go:282] 0 containers: []
	W1213 18:44:31.642733   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:31.642738   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:31.642796   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:31.669186   44722 cri.go:89] found id: ""
	I1213 18:44:31.669201   44722 logs.go:282] 0 containers: []
	W1213 18:44:31.669208   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:31.669212   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:31.669271   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:31.696754   44722 cri.go:89] found id: ""
	I1213 18:44:31.696768   44722 logs.go:282] 0 containers: []
	W1213 18:44:31.696775   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:31.696780   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:31.696840   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:31.722602   44722 cri.go:89] found id: ""
	I1213 18:44:31.722616   44722 logs.go:282] 0 containers: []
	W1213 18:44:31.722623   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:31.722628   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:31.722687   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:31.749280   44722 cri.go:89] found id: ""
	I1213 18:44:31.749294   44722 logs.go:282] 0 containers: []
	W1213 18:44:31.749302   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:31.749307   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:31.749386   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:31.774452   44722 cri.go:89] found id: ""
	I1213 18:44:31.774466   44722 logs.go:282] 0 containers: []
	W1213 18:44:31.774473   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:31.774478   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:31.774536   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:31.804250   44722 cri.go:89] found id: ""
	I1213 18:44:31.804264   44722 logs.go:282] 0 containers: []
	W1213 18:44:31.804271   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:31.804278   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:31.804288   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:31.876057   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:31.876075   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:31.887830   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:31.887845   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:31.956181   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:31.947856   13442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:31.948537   13442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:31.950179   13442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:31.950675   13442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:31.952236   13442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:31.947856   13442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:31.948537   13442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:31.950179   13442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:31.950675   13442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:31.952236   13442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
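The describe-nodes probe keeps hitting https://localhost:8441 because it runs on the node against the node-local kubeconfig named in the command, and that file's server entry is what produces the address in the connection-refused errors. Inspecting it directly shows the endpoint the retries depend on (a sketch, assuming the path seen in the log; the expected value is inferred from the errors above):

    sudo grep 'server:' /var/lib/minikube/kubeconfig    # expect something like: server: https://localhost:8441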
	I1213 18:44:31.956191   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:31.956202   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:32.025697   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:32.025716   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:34.558938   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:34.569025   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:34.569094   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:34.598446   44722 cri.go:89] found id: ""
	I1213 18:44:34.598459   44722 logs.go:282] 0 containers: []
	W1213 18:44:34.598466   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:34.598470   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:34.598537   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:34.624087   44722 cri.go:89] found id: ""
	I1213 18:44:34.624105   44722 logs.go:282] 0 containers: []
	W1213 18:44:34.624132   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:34.624137   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:34.624204   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:34.649175   44722 cri.go:89] found id: ""
	I1213 18:44:34.649189   44722 logs.go:282] 0 containers: []
	W1213 18:44:34.649196   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:34.649201   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:34.649257   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:34.679802   44722 cri.go:89] found id: ""
	I1213 18:44:34.679816   44722 logs.go:282] 0 containers: []
	W1213 18:44:34.679823   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:34.679828   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:34.679886   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:34.706842   44722 cri.go:89] found id: ""
	I1213 18:44:34.706856   44722 logs.go:282] 0 containers: []
	W1213 18:44:34.706863   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:34.706868   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:34.706928   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:34.732851   44722 cri.go:89] found id: ""
	I1213 18:44:34.732878   44722 logs.go:282] 0 containers: []
	W1213 18:44:34.732885   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:34.732906   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:34.732972   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:34.758491   44722 cri.go:89] found id: ""
	I1213 18:44:34.758504   44722 logs.go:282] 0 containers: []
	W1213 18:44:34.758511   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:34.758520   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:34.758530   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:34.831184   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:34.831212   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:34.854446   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:34.854463   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:34.939932   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:34.930787   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:34.931550   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:34.933427   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:34.934090   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:34.935671   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:34.930787   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:34.931550   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:34.933427   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:34.934090   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:34.935671   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:34.939943   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:34.939953   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:35.008351   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:35.008373   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:37.538092   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:37.548372   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:37.548433   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:37.576028   44722 cri.go:89] found id: ""
	I1213 18:44:37.576042   44722 logs.go:282] 0 containers: []
	W1213 18:44:37.576049   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:37.576054   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:37.576116   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:37.601240   44722 cri.go:89] found id: ""
	I1213 18:44:37.601264   44722 logs.go:282] 0 containers: []
	W1213 18:44:37.601272   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:37.601277   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:37.601354   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:37.629739   44722 cri.go:89] found id: ""
	I1213 18:44:37.629752   44722 logs.go:282] 0 containers: []
	W1213 18:44:37.629759   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:37.629764   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:37.629821   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:37.659547   44722 cri.go:89] found id: ""
	I1213 18:44:37.659560   44722 logs.go:282] 0 containers: []
	W1213 18:44:37.659567   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:37.659582   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:37.659639   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:37.687820   44722 cri.go:89] found id: ""
	I1213 18:44:37.687833   44722 logs.go:282] 0 containers: []
	W1213 18:44:37.687841   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:37.687846   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:37.687913   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:37.713950   44722 cri.go:89] found id: ""
	I1213 18:44:37.713964   44722 logs.go:282] 0 containers: []
	W1213 18:44:37.713971   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:37.713976   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:37.714035   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:37.739532   44722 cri.go:89] found id: ""
	I1213 18:44:37.739557   44722 logs.go:282] 0 containers: []
	W1213 18:44:37.739564   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:37.739572   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:37.739588   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:37.769815   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:37.769831   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:37.842765   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:37.842782   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:37.856389   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:37.856405   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:37.939080   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:37.930901   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:37.931464   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:37.933144   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:37.933671   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:37.935120   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:37.930901   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:37.931464   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:37.933144   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:37.933671   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:37.935120   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:37.939091   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:37.939101   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:40.510055   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:40.520003   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:40.520078   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:40.546166   44722 cri.go:89] found id: ""
	I1213 18:44:40.546181   44722 logs.go:282] 0 containers: []
	W1213 18:44:40.546187   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:40.546193   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:40.546255   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:40.575492   44722 cri.go:89] found id: ""
	I1213 18:44:40.575506   44722 logs.go:282] 0 containers: []
	W1213 18:44:40.575512   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:40.575517   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:40.575572   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:40.604021   44722 cri.go:89] found id: ""
	I1213 18:44:40.604034   44722 logs.go:282] 0 containers: []
	W1213 18:44:40.604042   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:40.604047   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:40.604103   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:40.634511   44722 cri.go:89] found id: ""
	I1213 18:44:40.634525   44722 logs.go:282] 0 containers: []
	W1213 18:44:40.634533   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:40.634537   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:40.634597   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:40.659233   44722 cri.go:89] found id: ""
	I1213 18:44:40.659255   44722 logs.go:282] 0 containers: []
	W1213 18:44:40.659263   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:40.659268   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:40.659327   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:40.684289   44722 cri.go:89] found id: ""
	I1213 18:44:40.684314   44722 logs.go:282] 0 containers: []
	W1213 18:44:40.684321   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:40.684326   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:40.684401   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:40.716236   44722 cri.go:89] found id: ""
	I1213 18:44:40.716250   44722 logs.go:282] 0 containers: []
	W1213 18:44:40.716258   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:40.716265   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:40.716277   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:40.743946   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:40.743962   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:40.809441   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:40.809459   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:40.820434   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:40.820458   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:40.906406   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:40.898049   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:40.898672   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:40.900282   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:40.900803   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:40.902445   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:40.898049   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:40.898672   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:40.900282   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:40.900803   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:40.902445   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:40.906416   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:40.906426   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:43.474264   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:43.484255   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:43.484319   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:43.511963   44722 cri.go:89] found id: ""
	I1213 18:44:43.511977   44722 logs.go:282] 0 containers: []
	W1213 18:44:43.511984   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:43.511989   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:43.512049   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:43.537311   44722 cri.go:89] found id: ""
	I1213 18:44:43.537332   44722 logs.go:282] 0 containers: []
	W1213 18:44:43.537339   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:43.537343   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:43.537433   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:43.564197   44722 cri.go:89] found id: ""
	I1213 18:44:43.564211   44722 logs.go:282] 0 containers: []
	W1213 18:44:43.564218   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:43.564222   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:43.564278   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:43.590140   44722 cri.go:89] found id: ""
	I1213 18:44:43.590154   44722 logs.go:282] 0 containers: []
	W1213 18:44:43.590160   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:43.590166   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:43.590226   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:43.615885   44722 cri.go:89] found id: ""
	I1213 18:44:43.615900   44722 logs.go:282] 0 containers: []
	W1213 18:44:43.615916   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:43.615921   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:43.615987   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:43.640848   44722 cri.go:89] found id: ""
	I1213 18:44:43.640862   44722 logs.go:282] 0 containers: []
	W1213 18:44:43.640868   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:43.640873   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:43.640931   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:43.665363   44722 cri.go:89] found id: ""
	I1213 18:44:43.665377   44722 logs.go:282] 0 containers: []
	W1213 18:44:43.665384   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:43.665391   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:43.665403   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:43.676205   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:43.676227   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:43.739640   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:43.731228   13863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:43.732007   13863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:43.733627   13863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:43.734165   13863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:43.735773   13863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:43.731228   13863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:43.732007   13863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:43.733627   13863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:43.734165   13863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:43.735773   13863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:43.739650   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:43.739661   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:43.807987   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:43.808008   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:43.851586   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:43.851601   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:46.426151   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:46.436240   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:46.436307   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:46.469030   44722 cri.go:89] found id: ""
	I1213 18:44:46.469044   44722 logs.go:282] 0 containers: []
	W1213 18:44:46.469051   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:46.469056   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:46.469115   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:46.494555   44722 cri.go:89] found id: ""
	I1213 18:44:46.494568   44722 logs.go:282] 0 containers: []
	W1213 18:44:46.494575   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:46.494580   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:46.494638   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:46.519291   44722 cri.go:89] found id: ""
	I1213 18:44:46.519305   44722 logs.go:282] 0 containers: []
	W1213 18:44:46.519312   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:46.519316   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:46.519371   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:46.547775   44722 cri.go:89] found id: ""
	I1213 18:44:46.547790   44722 logs.go:282] 0 containers: []
	W1213 18:44:46.547797   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:46.547802   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:46.547860   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:46.572951   44722 cri.go:89] found id: ""
	I1213 18:44:46.572965   44722 logs.go:282] 0 containers: []
	W1213 18:44:46.572972   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:46.572978   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:46.573096   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:46.598953   44722 cri.go:89] found id: ""
	I1213 18:44:46.598967   44722 logs.go:282] 0 containers: []
	W1213 18:44:46.598973   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:46.598979   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:46.599036   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:46.624426   44722 cri.go:89] found id: ""
	I1213 18:44:46.624440   44722 logs.go:282] 0 containers: []
	W1213 18:44:46.624447   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:46.624454   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:46.624465   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:46.656272   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:46.656289   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:46.720505   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:46.720523   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:46.731422   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:46.731438   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:46.794954   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:46.786465   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:46.786956   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:46.788689   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:46.789067   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:46.790678   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:46.786465   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:46.786956   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:46.788689   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:46.789067   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:46.790678   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:46.794964   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:46.794974   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:49.368713   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:49.379093   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:49.379150   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:49.404638   44722 cri.go:89] found id: ""
	I1213 18:44:49.404652   44722 logs.go:282] 0 containers: []
	W1213 18:44:49.404670   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:49.404676   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:49.404743   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:49.432165   44722 cri.go:89] found id: ""
	I1213 18:44:49.432185   44722 logs.go:282] 0 containers: []
	W1213 18:44:49.432192   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:49.432203   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:49.432274   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:49.457580   44722 cri.go:89] found id: ""
	I1213 18:44:49.457594   44722 logs.go:282] 0 containers: []
	W1213 18:44:49.457601   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:49.457605   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:49.457661   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:49.482518   44722 cri.go:89] found id: ""
	I1213 18:44:49.482531   44722 logs.go:282] 0 containers: []
	W1213 18:44:49.482539   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:49.482544   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:49.482604   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:49.508421   44722 cri.go:89] found id: ""
	I1213 18:44:49.508435   44722 logs.go:282] 0 containers: []
	W1213 18:44:49.508442   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:49.508447   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:49.508505   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:49.533273   44722 cri.go:89] found id: ""
	I1213 18:44:49.533286   44722 logs.go:282] 0 containers: []
	W1213 18:44:49.533293   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:49.533298   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:49.533363   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:49.559407   44722 cri.go:89] found id: ""
	I1213 18:44:49.559421   44722 logs.go:282] 0 containers: []
	W1213 18:44:49.559428   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:49.559436   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:49.559447   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:49.586863   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:49.586880   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:49.655301   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:49.655318   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:49.666641   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:49.666657   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:49.731547   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:49.723390   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:49.723925   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:49.725596   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:49.726135   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:49.727809   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:49.723390   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:49.723925   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:49.725596   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:49.726135   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:49.727809   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:49.731558   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:49.731569   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:52.302228   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:52.312354   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:52.312414   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:52.339337   44722 cri.go:89] found id: ""
	I1213 18:44:52.339351   44722 logs.go:282] 0 containers: []
	W1213 18:44:52.339358   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:52.339363   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:52.339428   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:52.364722   44722 cri.go:89] found id: ""
	I1213 18:44:52.364736   44722 logs.go:282] 0 containers: []
	W1213 18:44:52.364744   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:52.364748   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:52.364807   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:52.392869   44722 cri.go:89] found id: ""
	I1213 18:44:52.392883   44722 logs.go:282] 0 containers: []
	W1213 18:44:52.392889   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:52.392894   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:52.392952   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:52.420101   44722 cri.go:89] found id: ""
	I1213 18:44:52.420115   44722 logs.go:282] 0 containers: []
	W1213 18:44:52.420122   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:52.420126   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:52.420186   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:52.444708   44722 cri.go:89] found id: ""
	I1213 18:44:52.444721   44722 logs.go:282] 0 containers: []
	W1213 18:44:52.444728   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:52.444733   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:52.444789   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:52.470027   44722 cri.go:89] found id: ""
	I1213 18:44:52.470041   44722 logs.go:282] 0 containers: []
	W1213 18:44:52.470048   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:52.470053   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:52.470112   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:52.494761   44722 cri.go:89] found id: ""
	I1213 18:44:52.494775   44722 logs.go:282] 0 containers: []
	W1213 18:44:52.494782   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:52.494789   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:52.494799   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:52.563435   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:52.563455   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:52.597529   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:52.597545   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:52.667889   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:52.667909   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:52.679020   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:52.679036   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:52.744141   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:52.735527   14192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:52.736263   14192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:52.738012   14192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:52.738630   14192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:52.740366   14192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:52.735527   14192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:52.736263   14192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:52.738012   14192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:52.738630   14192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:52.740366   14192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:55.245804   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:55.256306   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:55.256370   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:55.283000   44722 cri.go:89] found id: ""
	I1213 18:44:55.283013   44722 logs.go:282] 0 containers: []
	W1213 18:44:55.283020   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:55.283025   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:55.283082   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:55.313671   44722 cri.go:89] found id: ""
	I1213 18:44:55.313684   44722 logs.go:282] 0 containers: []
	W1213 18:44:55.313690   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:55.313695   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:55.313755   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:55.342037   44722 cri.go:89] found id: ""
	I1213 18:44:55.342051   44722 logs.go:282] 0 containers: []
	W1213 18:44:55.342059   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:55.342064   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:55.342127   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:55.367525   44722 cri.go:89] found id: ""
	I1213 18:44:55.367538   44722 logs.go:282] 0 containers: []
	W1213 18:44:55.367557   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:55.367562   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:55.367628   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:55.393243   44722 cri.go:89] found id: ""
	I1213 18:44:55.393257   44722 logs.go:282] 0 containers: []
	W1213 18:44:55.393274   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:55.393280   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:55.393353   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:55.418513   44722 cri.go:89] found id: ""
	I1213 18:44:55.418527   44722 logs.go:282] 0 containers: []
	W1213 18:44:55.418534   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:55.418539   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:55.418607   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:55.443468   44722 cri.go:89] found id: ""
	I1213 18:44:55.443483   44722 logs.go:282] 0 containers: []
	W1213 18:44:55.443490   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:55.443500   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:55.443511   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:55.515427   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:55.507029   14277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:55.507943   14277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:55.509657   14277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:55.510148   14277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:55.511618   14277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:55.507029   14277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:55.507943   14277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:55.509657   14277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:55.510148   14277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:55.511618   14277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:55.515437   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:55.515448   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:55.586865   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:55.586885   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:55.616109   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:55.616125   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:55.685952   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:55.685972   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:58.198520   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:58.208638   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:58.208696   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:58.234480   44722 cri.go:89] found id: ""
	I1213 18:44:58.234494   44722 logs.go:282] 0 containers: []
	W1213 18:44:58.234501   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:58.234506   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:58.234561   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:58.258261   44722 cri.go:89] found id: ""
	I1213 18:44:58.258274   44722 logs.go:282] 0 containers: []
	W1213 18:44:58.258281   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:58.258287   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:58.258358   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:58.282891   44722 cri.go:89] found id: ""
	I1213 18:44:58.282904   44722 logs.go:282] 0 containers: []
	W1213 18:44:58.282911   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:58.282916   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:58.282971   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:58.315746   44722 cri.go:89] found id: ""
	I1213 18:44:58.315760   44722 logs.go:282] 0 containers: []
	W1213 18:44:58.315766   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:58.315771   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:58.315830   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:58.340701   44722 cri.go:89] found id: ""
	I1213 18:44:58.340714   44722 logs.go:282] 0 containers: []
	W1213 18:44:58.340721   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:58.340726   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:58.340792   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:58.369974   44722 cri.go:89] found id: ""
	I1213 18:44:58.369987   44722 logs.go:282] 0 containers: []
	W1213 18:44:58.369994   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:58.369998   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:58.370063   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:58.398903   44722 cri.go:89] found id: ""
	I1213 18:44:58.398917   44722 logs.go:282] 0 containers: []
	W1213 18:44:58.398924   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:58.398932   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:58.398945   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:58.468133   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:58.468153   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:58.495769   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:58.495787   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:58.562032   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:58.562052   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:58.573192   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:58.573208   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:58.639058   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:58.631176   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:58.631711   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:58.633329   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:58.633843   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:58.635281   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:58.631176   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:58.631711   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:58.633329   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:58.633843   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:58.635281   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:01.139326   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:01.150701   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:01.150773   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:01.180572   44722 cri.go:89] found id: ""
	I1213 18:45:01.180597   44722 logs.go:282] 0 containers: []
	W1213 18:45:01.180627   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:01.180632   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:01.180723   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:01.210001   44722 cri.go:89] found id: ""
	I1213 18:45:01.210027   44722 logs.go:282] 0 containers: []
	W1213 18:45:01.210035   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:01.210040   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:01.210144   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:01.240388   44722 cri.go:89] found id: ""
	I1213 18:45:01.240411   44722 logs.go:282] 0 containers: []
	W1213 18:45:01.240419   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:01.240425   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:01.240500   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:01.270469   44722 cri.go:89] found id: ""
	I1213 18:45:01.270485   44722 logs.go:282] 0 containers: []
	W1213 18:45:01.270492   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:01.270498   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:01.270560   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:01.298917   44722 cri.go:89] found id: ""
	I1213 18:45:01.298932   44722 logs.go:282] 0 containers: []
	W1213 18:45:01.298950   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:01.298956   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:01.299047   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:01.326174   44722 cri.go:89] found id: ""
	I1213 18:45:01.326188   44722 logs.go:282] 0 containers: []
	W1213 18:45:01.326195   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:01.326200   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:01.326260   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:01.355316   44722 cri.go:89] found id: ""
	I1213 18:45:01.355331   44722 logs.go:282] 0 containers: []
	W1213 18:45:01.355339   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:01.355348   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:01.355360   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:01.431176   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:01.431206   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:01.443676   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:01.443695   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:01.512045   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:01.503556   14499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:01.504288   14499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:01.506017   14499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:01.506375   14499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:01.508015   14499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:01.503556   14499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:01.504288   14499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:01.506017   14499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:01.506375   14499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:01.508015   14499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:01.512056   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:01.512066   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:01.581540   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:01.581560   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:04.113152   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:04.126133   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:04.126190   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:04.157022   44722 cri.go:89] found id: ""
	I1213 18:45:04.157037   44722 logs.go:282] 0 containers: []
	W1213 18:45:04.157044   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:04.157050   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:04.157111   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:04.184060   44722 cri.go:89] found id: ""
	I1213 18:45:04.184073   44722 logs.go:282] 0 containers: []
	W1213 18:45:04.184080   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:04.184085   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:04.184144   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:04.210310   44722 cri.go:89] found id: ""
	I1213 18:45:04.210323   44722 logs.go:282] 0 containers: []
	W1213 18:45:04.210330   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:04.210336   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:04.210398   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:04.236685   44722 cri.go:89] found id: ""
	I1213 18:45:04.236700   44722 logs.go:282] 0 containers: []
	W1213 18:45:04.236707   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:04.236712   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:04.236771   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:04.265948   44722 cri.go:89] found id: ""
	I1213 18:45:04.265961   44722 logs.go:282] 0 containers: []
	W1213 18:45:04.265968   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:04.265973   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:04.266029   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:04.291029   44722 cri.go:89] found id: ""
	I1213 18:45:04.291042   44722 logs.go:282] 0 containers: []
	W1213 18:45:04.291049   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:04.291065   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:04.291122   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:04.316748   44722 cri.go:89] found id: ""
	I1213 18:45:04.316762   44722 logs.go:282] 0 containers: []
	W1213 18:45:04.316768   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:04.316787   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:04.316798   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:04.380978   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:04.380996   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:04.392325   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:04.392342   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:04.459627   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:04.451449   14606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:04.452151   14606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:04.453706   14606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:04.454141   14606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:04.455629   14606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:04.451449   14606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:04.452151   14606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:04.453706   14606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:04.454141   14606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:04.455629   14606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:04.459637   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:04.459648   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:04.527567   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:04.527587   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:07.060097   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:07.070755   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:07.070814   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:07.098777   44722 cri.go:89] found id: ""
	I1213 18:45:07.098790   44722 logs.go:282] 0 containers: []
	W1213 18:45:07.098797   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:07.098802   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:07.098863   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:07.126857   44722 cri.go:89] found id: ""
	I1213 18:45:07.126870   44722 logs.go:282] 0 containers: []
	W1213 18:45:07.126877   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:07.126882   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:07.126938   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:07.154665   44722 cri.go:89] found id: ""
	I1213 18:45:07.154679   44722 logs.go:282] 0 containers: []
	W1213 18:45:07.154686   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:07.154691   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:07.154751   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:07.183998   44722 cri.go:89] found id: ""
	I1213 18:45:07.184011   44722 logs.go:282] 0 containers: []
	W1213 18:45:07.184018   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:07.184023   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:07.184079   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:07.209217   44722 cri.go:89] found id: ""
	I1213 18:45:07.209230   44722 logs.go:282] 0 containers: []
	W1213 18:45:07.209238   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:07.209249   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:07.209309   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:07.238297   44722 cri.go:89] found id: ""
	I1213 18:45:07.238321   44722 logs.go:282] 0 containers: []
	W1213 18:45:07.238328   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:07.238333   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:07.238392   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:07.268115   44722 cri.go:89] found id: ""
	I1213 18:45:07.268130   44722 logs.go:282] 0 containers: []
	W1213 18:45:07.268136   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:07.268144   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:07.268156   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:07.337456   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:07.337475   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:07.365283   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:07.365299   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:07.433864   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:07.433882   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:07.445039   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:07.445055   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:07.509195   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:07.500621   14726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:07.500993   14726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:07.502681   14726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:07.503001   14726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:07.504545   14726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:07.500621   14726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:07.500993   14726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:07.502681   14726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:07.503001   14726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:07.504545   14726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:10.010342   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:10.026847   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:10.026923   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:10.055758   44722 cri.go:89] found id: ""
	I1213 18:45:10.055773   44722 logs.go:282] 0 containers: []
	W1213 18:45:10.055781   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:10.055786   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:10.055847   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:10.084492   44722 cri.go:89] found id: ""
	I1213 18:45:10.084508   44722 logs.go:282] 0 containers: []
	W1213 18:45:10.084515   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:10.084521   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:10.084579   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:10.124733   44722 cri.go:89] found id: ""
	I1213 18:45:10.124748   44722 logs.go:282] 0 containers: []
	W1213 18:45:10.124756   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:10.124760   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:10.124823   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:10.167562   44722 cri.go:89] found id: ""
	I1213 18:45:10.167575   44722 logs.go:282] 0 containers: []
	W1213 18:45:10.167583   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:10.167588   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:10.167647   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:10.196162   44722 cri.go:89] found id: ""
	I1213 18:45:10.196178   44722 logs.go:282] 0 containers: []
	W1213 18:45:10.196185   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:10.196190   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:10.196251   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:10.222349   44722 cri.go:89] found id: ""
	I1213 18:45:10.222362   44722 logs.go:282] 0 containers: []
	W1213 18:45:10.222370   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:10.222375   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:10.222433   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:10.252822   44722 cri.go:89] found id: ""
	I1213 18:45:10.252838   44722 logs.go:282] 0 containers: []
	W1213 18:45:10.252848   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:10.252856   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:10.252867   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:10.318555   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:10.318574   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:10.330833   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:10.330848   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:10.403119   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:10.391784   14819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:10.392505   14819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:10.394095   14819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:10.394656   14819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:10.396739   14819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:10.391784   14819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:10.392505   14819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:10.394095   14819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:10.394656   14819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:10.396739   14819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:10.403129   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:10.403139   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:10.476776   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:10.476796   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:13.006030   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:13.016994   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:13.017078   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:13.047302   44722 cri.go:89] found id: ""
	I1213 18:45:13.047316   44722 logs.go:282] 0 containers: []
	W1213 18:45:13.047322   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:13.047327   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:13.047390   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:13.072990   44722 cri.go:89] found id: ""
	I1213 18:45:13.073014   44722 logs.go:282] 0 containers: []
	W1213 18:45:13.073024   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:13.073029   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:13.073086   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:13.104144   44722 cri.go:89] found id: ""
	I1213 18:45:13.104158   44722 logs.go:282] 0 containers: []
	W1213 18:45:13.104165   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:13.104169   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:13.104233   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:13.133122   44722 cri.go:89] found id: ""
	I1213 18:45:13.133135   44722 logs.go:282] 0 containers: []
	W1213 18:45:13.133141   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:13.133147   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:13.133228   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:13.165373   44722 cri.go:89] found id: ""
	I1213 18:45:13.165399   44722 logs.go:282] 0 containers: []
	W1213 18:45:13.165406   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:13.165411   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:13.165473   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:13.191991   44722 cri.go:89] found id: ""
	I1213 18:45:13.192004   44722 logs.go:282] 0 containers: []
	W1213 18:45:13.192012   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:13.192017   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:13.192082   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:13.217774   44722 cri.go:89] found id: ""
	I1213 18:45:13.217788   44722 logs.go:282] 0 containers: []
	W1213 18:45:13.217795   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:13.217802   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:13.217813   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:13.284517   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:13.275477   14920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:13.276368   14920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:13.278192   14920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:13.278786   14920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:13.280431   14920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:13.275477   14920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:13.276368   14920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:13.278192   14920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:13.278786   14920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:13.280431   14920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:13.284527   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:13.284538   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:13.353730   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:13.353749   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:13.384210   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:13.384225   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:13.452832   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:13.452849   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
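The cycle above (apiserver process check, per-component crictl listing, then log gathering) repeats every few seconds while minikube waits for the control plane to come back; every pass fails the same way because nothing is listening on localhost:8441. A minimal sketch of reproducing those checks by hand, assuming shell access to the node (for example via `minikube ssh` against the affected profile) — the commands mirror the ones shown in the log and are illustrative, not part of the report:

    # Is an apiserver process running at all? (same check the wait loop runs)
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # Has CRI-O ever created the kube-apiserver container?
    sudo crictl ps -a --name=kube-apiserver
    # Ask the kubelet why the static pod is not coming up
    sudo journalctl -u kubelet -n 100 --no-pager | grep -i apiserver
    # Probe the port that the "connection refused" errors above point at
    curl -sk https://localhost:8441/healthz || echo "apiserver not listening on 8441"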
	I1213 18:45:15.964206   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:15.976388   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:15.976453   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:16.006122   44722 cri.go:89] found id: ""
	I1213 18:45:16.006136   44722 logs.go:282] 0 containers: []
	W1213 18:45:16.006143   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:16.006149   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:16.006211   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:16.031686   44722 cri.go:89] found id: ""
	I1213 18:45:16.031700   44722 logs.go:282] 0 containers: []
	W1213 18:45:16.031707   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:16.031712   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:16.031768   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:16.057702   44722 cri.go:89] found id: ""
	I1213 18:45:16.057715   44722 logs.go:282] 0 containers: []
	W1213 18:45:16.057722   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:16.057728   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:16.057783   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:16.090888   44722 cri.go:89] found id: ""
	I1213 18:45:16.090913   44722 logs.go:282] 0 containers: []
	W1213 18:45:16.090921   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:16.090927   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:16.090997   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:16.128051   44722 cri.go:89] found id: ""
	I1213 18:45:16.128075   44722 logs.go:282] 0 containers: []
	W1213 18:45:16.128083   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:16.128089   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:16.128160   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:16.157962   44722 cri.go:89] found id: ""
	I1213 18:45:16.157986   44722 logs.go:282] 0 containers: []
	W1213 18:45:16.157993   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:16.157999   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:16.158057   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:16.184049   44722 cri.go:89] found id: ""
	I1213 18:45:16.184063   44722 logs.go:282] 0 containers: []
	W1213 18:45:16.184070   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:16.184077   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:16.184088   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:16.250129   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:16.250149   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:16.261107   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:16.261125   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:16.330408   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:16.321894   15030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:16.322673   15030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:16.324350   15030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:16.324661   15030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:16.326266   15030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:16.321894   15030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:16.322673   15030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:16.324350   15030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:16.324661   15030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:16.326266   15030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:16.330418   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:16.330428   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:16.398576   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:16.398594   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:18.928496   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:18.938797   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:18.938873   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:18.964909   44722 cri.go:89] found id: ""
	I1213 18:45:18.964924   44722 logs.go:282] 0 containers: []
	W1213 18:45:18.964932   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:18.964939   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:18.964999   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:18.991414   44722 cri.go:89] found id: ""
	I1213 18:45:18.991428   44722 logs.go:282] 0 containers: []
	W1213 18:45:18.991446   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:18.991451   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:18.991508   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:19.021961   44722 cri.go:89] found id: ""
	I1213 18:45:19.021976   44722 logs.go:282] 0 containers: []
	W1213 18:45:19.021983   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:19.021988   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:19.022055   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:19.046931   44722 cri.go:89] found id: ""
	I1213 18:45:19.046945   44722 logs.go:282] 0 containers: []
	W1213 18:45:19.046952   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:19.046957   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:19.047013   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:19.072683   44722 cri.go:89] found id: ""
	I1213 18:45:19.072696   44722 logs.go:282] 0 containers: []
	W1213 18:45:19.072703   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:19.072708   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:19.072778   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:19.100627   44722 cri.go:89] found id: ""
	I1213 18:45:19.100643   44722 logs.go:282] 0 containers: []
	W1213 18:45:19.100651   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:19.100656   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:19.100720   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:19.130142   44722 cri.go:89] found id: ""
	I1213 18:45:19.130157   44722 logs.go:282] 0 containers: []
	W1213 18:45:19.130163   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:19.130171   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:19.130182   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:19.197474   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:19.197494   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:19.208889   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:19.208908   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:19.274541   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:19.265647   15139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:19.266238   15139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:19.267928   15139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:19.268736   15139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:19.270556   15139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:19.265647   15139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:19.266238   15139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:19.267928   15139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:19.268736   15139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:19.270556   15139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:19.274551   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:19.274561   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:19.342919   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:19.342938   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:21.872871   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:21.883492   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:21.883550   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:21.910011   44722 cri.go:89] found id: ""
	I1213 18:45:21.910025   44722 logs.go:282] 0 containers: []
	W1213 18:45:21.910032   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:21.910037   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:21.910094   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:21.935440   44722 cri.go:89] found id: ""
	I1213 18:45:21.935454   44722 logs.go:282] 0 containers: []
	W1213 18:45:21.935461   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:21.935476   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:21.935535   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:21.970166   44722 cri.go:89] found id: ""
	I1213 18:45:21.970181   44722 logs.go:282] 0 containers: []
	W1213 18:45:21.970188   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:21.970193   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:21.970254   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:21.996521   44722 cri.go:89] found id: ""
	I1213 18:45:21.996544   44722 logs.go:282] 0 containers: []
	W1213 18:45:21.996552   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:21.996557   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:21.996625   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:22.026015   44722 cri.go:89] found id: ""
	I1213 18:45:22.026030   44722 logs.go:282] 0 containers: []
	W1213 18:45:22.026048   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:22.026054   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:22.026136   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:22.052512   44722 cri.go:89] found id: ""
	I1213 18:45:22.052526   44722 logs.go:282] 0 containers: []
	W1213 18:45:22.052533   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:22.052547   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:22.052634   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:22.087211   44722 cri.go:89] found id: ""
	I1213 18:45:22.087242   44722 logs.go:282] 0 containers: []
	W1213 18:45:22.087249   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:22.087258   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:22.087268   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:22.161238   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:22.161256   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:22.172311   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:22.172327   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:22.235337   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:22.226748   15248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:22.227404   15248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:22.229399   15248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:22.229780   15248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:22.231333   15248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:22.226748   15248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:22.227404   15248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:22.229399   15248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:22.229780   15248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:22.231333   15248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:22.235349   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:22.235360   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:22.304771   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:22.304790   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:24.834025   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:24.844561   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:24.844623   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:24.869497   44722 cri.go:89] found id: ""
	I1213 18:45:24.869512   44722 logs.go:282] 0 containers: []
	W1213 18:45:24.869519   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:24.869524   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:24.869582   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:24.899663   44722 cri.go:89] found id: ""
	I1213 18:45:24.899677   44722 logs.go:282] 0 containers: []
	W1213 18:45:24.899685   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:24.899690   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:24.899750   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:24.929664   44722 cri.go:89] found id: ""
	I1213 18:45:24.929678   44722 logs.go:282] 0 containers: []
	W1213 18:45:24.929685   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:24.929689   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:24.929748   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:24.954943   44722 cri.go:89] found id: ""
	I1213 18:45:24.954957   44722 logs.go:282] 0 containers: []
	W1213 18:45:24.954964   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:24.954969   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:24.955024   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:24.981964   44722 cri.go:89] found id: ""
	I1213 18:45:24.981978   44722 logs.go:282] 0 containers: []
	W1213 18:45:24.981985   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:24.981991   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:24.982048   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:25.024491   44722 cri.go:89] found id: ""
	I1213 18:45:25.024507   44722 logs.go:282] 0 containers: []
	W1213 18:45:25.024514   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:25.024519   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:25.024587   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:25.059717   44722 cri.go:89] found id: ""
	I1213 18:45:25.059732   44722 logs.go:282] 0 containers: []
	W1213 18:45:25.059740   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:25.059747   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:25.059758   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:25.137684   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:25.137709   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:25.152450   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:25.152466   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:25.224073   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:25.215282   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:25.215897   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:25.217852   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:25.218715   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:25.219908   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:25.215282   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:25.215897   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:25.217852   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:25.218715   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:25.219908   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:25.224083   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:25.224095   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:25.293145   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:25.293164   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:27.825368   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:27.835872   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:27.835932   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:27.861658   44722 cri.go:89] found id: ""
	I1213 18:45:27.861672   44722 logs.go:282] 0 containers: []
	W1213 18:45:27.861679   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:27.861684   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:27.861742   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:27.886615   44722 cri.go:89] found id: ""
	I1213 18:45:27.886629   44722 logs.go:282] 0 containers: []
	W1213 18:45:27.886636   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:27.886641   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:27.886697   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:27.915655   44722 cri.go:89] found id: ""
	I1213 18:45:27.915669   44722 logs.go:282] 0 containers: []
	W1213 18:45:27.915676   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:27.915681   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:27.915743   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:27.940463   44722 cri.go:89] found id: ""
	I1213 18:45:27.940477   44722 logs.go:282] 0 containers: []
	W1213 18:45:27.940484   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:27.940489   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:27.940546   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:27.970042   44722 cri.go:89] found id: ""
	I1213 18:45:27.970056   44722 logs.go:282] 0 containers: []
	W1213 18:45:27.970063   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:27.970068   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:27.970125   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:27.996687   44722 cri.go:89] found id: ""
	I1213 18:45:27.996702   44722 logs.go:282] 0 containers: []
	W1213 18:45:27.996708   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:27.996714   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:27.996773   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:28.025848   44722 cri.go:89] found id: ""
	I1213 18:45:28.025861   44722 logs.go:282] 0 containers: []
	W1213 18:45:28.025868   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:28.025876   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:28.025894   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:28.104265   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:28.104292   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:28.116838   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:28.116855   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:28.189318   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:28.180911   15462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:28.181676   15462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:28.183358   15462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:28.184009   15462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:28.185382   15462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:28.180911   15462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:28.181676   15462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:28.183358   15462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:28.184009   15462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:28.185382   15462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:28.189329   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:28.189340   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:28.257409   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:28.257428   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:30.789289   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:30.799688   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:30.799748   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:30.828658   44722 cri.go:89] found id: ""
	I1213 18:45:30.828672   44722 logs.go:282] 0 containers: []
	W1213 18:45:30.828680   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:30.828688   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:30.828748   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:30.854242   44722 cri.go:89] found id: ""
	I1213 18:45:30.854256   44722 logs.go:282] 0 containers: []
	W1213 18:45:30.854263   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:30.854268   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:30.854325   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:30.879211   44722 cri.go:89] found id: ""
	I1213 18:45:30.879225   44722 logs.go:282] 0 containers: []
	W1213 18:45:30.879235   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:30.879241   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:30.879298   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:30.908380   44722 cri.go:89] found id: ""
	I1213 18:45:30.908394   44722 logs.go:282] 0 containers: []
	W1213 18:45:30.908401   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:30.908406   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:30.908462   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:30.934004   44722 cri.go:89] found id: ""
	I1213 18:45:30.934023   44722 logs.go:282] 0 containers: []
	W1213 18:45:30.934030   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:30.934035   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:30.934094   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:30.959088   44722 cri.go:89] found id: ""
	I1213 18:45:30.959101   44722 logs.go:282] 0 containers: []
	W1213 18:45:30.959108   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:30.959113   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:30.959172   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:30.987128   44722 cri.go:89] found id: ""
	I1213 18:45:30.987142   44722 logs.go:282] 0 containers: []
	W1213 18:45:30.987149   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:30.987156   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:30.987167   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:30.999233   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:30.999253   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:31.070686   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:31.062512   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:31.063387   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:31.064956   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:31.065476   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:31.066859   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:31.062512   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:31.063387   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:31.064956   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:31.065476   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:31.066859   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:31.070697   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:31.070708   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:31.149373   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:31.149393   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:31.182467   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:31.182484   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:33.754920   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:33.764984   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:33.765061   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:33.789610   44722 cri.go:89] found id: ""
	I1213 18:45:33.789624   44722 logs.go:282] 0 containers: []
	W1213 18:45:33.789630   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:33.789635   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:33.789694   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:33.814723   44722 cri.go:89] found id: ""
	I1213 18:45:33.814738   44722 logs.go:282] 0 containers: []
	W1213 18:45:33.814744   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:33.814749   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:33.814811   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:33.841835   44722 cri.go:89] found id: ""
	I1213 18:45:33.841848   44722 logs.go:282] 0 containers: []
	W1213 18:45:33.841855   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:33.841860   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:33.841917   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:33.875847   44722 cri.go:89] found id: ""
	I1213 18:45:33.875871   44722 logs.go:282] 0 containers: []
	W1213 18:45:33.875878   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:33.875885   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:33.875953   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:33.903037   44722 cri.go:89] found id: ""
	I1213 18:45:33.903050   44722 logs.go:282] 0 containers: []
	W1213 18:45:33.903057   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:33.903062   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:33.903135   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:33.934423   44722 cri.go:89] found id: ""
	I1213 18:45:33.934437   44722 logs.go:282] 0 containers: []
	W1213 18:45:33.934444   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:33.934449   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:33.934522   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:33.959437   44722 cri.go:89] found id: ""
	I1213 18:45:33.959450   44722 logs.go:282] 0 containers: []
	W1213 18:45:33.959458   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:33.959465   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:33.959475   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:34.024568   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:34.024587   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:34.036558   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:34.036583   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:34.113960   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:34.105595   15665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:34.106445   15665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:34.107646   15665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:34.108191   15665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:34.109855   15665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:34.105595   15665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:34.106445   15665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:34.107646   15665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:34.108191   15665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:34.109855   15665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:34.113970   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:34.113988   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:34.186879   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:34.186900   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:36.717771   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:36.731405   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:36.731462   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:36.758511   44722 cri.go:89] found id: ""
	I1213 18:45:36.758525   44722 logs.go:282] 0 containers: []
	W1213 18:45:36.758532   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:36.758537   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:36.758595   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:36.784601   44722 cri.go:89] found id: ""
	I1213 18:45:36.784614   44722 logs.go:282] 0 containers: []
	W1213 18:45:36.784621   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:36.784626   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:36.784683   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:36.813889   44722 cri.go:89] found id: ""
	I1213 18:45:36.813903   44722 logs.go:282] 0 containers: []
	W1213 18:45:36.813910   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:36.813915   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:36.813974   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:36.840673   44722 cri.go:89] found id: ""
	I1213 18:45:36.840687   44722 logs.go:282] 0 containers: []
	W1213 18:45:36.840695   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:36.840701   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:36.840758   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:36.866658   44722 cri.go:89] found id: ""
	I1213 18:45:36.866673   44722 logs.go:282] 0 containers: []
	W1213 18:45:36.866679   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:36.866684   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:36.866761   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:36.893289   44722 cri.go:89] found id: ""
	I1213 18:45:36.893303   44722 logs.go:282] 0 containers: []
	W1213 18:45:36.893311   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:36.893316   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:36.893377   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:36.920158   44722 cri.go:89] found id: ""
	I1213 18:45:36.920171   44722 logs.go:282] 0 containers: []
	W1213 18:45:36.920178   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:36.920186   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:36.920196   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:36.987002   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:36.987021   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:36.999105   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:36.999128   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:37.072378   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:37.063848   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:37.064510   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:37.066038   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:37.066549   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:37.067999   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:37.063848   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:37.064510   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:37.066038   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:37.066549   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:37.067999   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:37.072390   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:37.072401   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:37.145027   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:37.145047   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:39.682857   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:39.693055   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:39.693114   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:39.717750   44722 cri.go:89] found id: ""
	I1213 18:45:39.717763   44722 logs.go:282] 0 containers: []
	W1213 18:45:39.717771   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:39.717776   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:39.717831   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:39.748452   44722 cri.go:89] found id: ""
	I1213 18:45:39.748466   44722 logs.go:282] 0 containers: []
	W1213 18:45:39.748473   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:39.748478   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:39.748535   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:39.775686   44722 cri.go:89] found id: ""
	I1213 18:45:39.775700   44722 logs.go:282] 0 containers: []
	W1213 18:45:39.775706   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:39.775712   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:39.775773   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:39.801049   44722 cri.go:89] found id: ""
	I1213 18:45:39.801063   44722 logs.go:282] 0 containers: []
	W1213 18:45:39.801070   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:39.801075   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:39.801132   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:39.829545   44722 cri.go:89] found id: ""
	I1213 18:45:39.829559   44722 logs.go:282] 0 containers: []
	W1213 18:45:39.829566   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:39.829571   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:39.829627   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:39.855870   44722 cri.go:89] found id: ""
	I1213 18:45:39.855883   44722 logs.go:282] 0 containers: []
	W1213 18:45:39.855890   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:39.855895   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:39.855951   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:39.880432   44722 cri.go:89] found id: ""
	I1213 18:45:39.880446   44722 logs.go:282] 0 containers: []
	W1213 18:45:39.880452   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:39.880460   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:39.880471   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:39.944602   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:39.936636   15870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:39.937539   15870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:39.939109   15870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:39.939488   15870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:39.940927   15870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:39.936636   15870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:39.937539   15870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:39.939109   15870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:39.939488   15870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:39.940927   15870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:39.944613   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:39.944623   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:40.014162   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:40.014186   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:40.052762   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:40.052780   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:40.123344   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:40.123364   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:42.639745   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:42.650139   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:42.650196   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:42.674810   44722 cri.go:89] found id: ""
	I1213 18:45:42.674824   44722 logs.go:282] 0 containers: []
	W1213 18:45:42.674831   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:42.674836   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:42.674896   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:42.705498   44722 cri.go:89] found id: ""
	I1213 18:45:42.705512   44722 logs.go:282] 0 containers: []
	W1213 18:45:42.705519   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:42.705524   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:42.705590   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:42.731558   44722 cri.go:89] found id: ""
	I1213 18:45:42.731572   44722 logs.go:282] 0 containers: []
	W1213 18:45:42.731586   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:42.731591   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:42.731650   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:42.758070   44722 cri.go:89] found id: ""
	I1213 18:45:42.758084   44722 logs.go:282] 0 containers: []
	W1213 18:45:42.758098   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:42.758103   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:42.758163   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:42.784043   44722 cri.go:89] found id: ""
	I1213 18:45:42.784057   44722 logs.go:282] 0 containers: []
	W1213 18:45:42.784065   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:42.784069   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:42.784130   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:42.810580   44722 cri.go:89] found id: ""
	I1213 18:45:42.810594   44722 logs.go:282] 0 containers: []
	W1213 18:45:42.810602   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:42.810607   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:42.810667   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:42.837217   44722 cri.go:89] found id: ""
	I1213 18:45:42.837230   44722 logs.go:282] 0 containers: []
	W1213 18:45:42.837237   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:42.837244   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:42.837255   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:42.869269   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:42.869289   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:42.937246   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:42.937265   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:42.948535   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:42.948551   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:43.014525   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:43.006257   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:43.006741   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:43.008386   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:43.008729   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:43.010279   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:43.006257   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:43.006741   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:43.008386   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:43.008729   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:43.010279   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:43.014550   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:43.014561   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:45.585650   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:45.596016   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:45.596081   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:45.621732   44722 cri.go:89] found id: ""
	I1213 18:45:45.621746   44722 logs.go:282] 0 containers: []
	W1213 18:45:45.621753   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:45.621758   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:45.621828   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:45.647999   44722 cri.go:89] found id: ""
	I1213 18:45:45.648013   44722 logs.go:282] 0 containers: []
	W1213 18:45:45.648020   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:45.648025   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:45.648084   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:45.672656   44722 cri.go:89] found id: ""
	I1213 18:45:45.672669   44722 logs.go:282] 0 containers: []
	W1213 18:45:45.672676   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:45.672681   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:45.672737   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:45.697633   44722 cri.go:89] found id: ""
	I1213 18:45:45.697648   44722 logs.go:282] 0 containers: []
	W1213 18:45:45.697655   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:45.697660   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:45.697725   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:45.722938   44722 cri.go:89] found id: ""
	I1213 18:45:45.722957   44722 logs.go:282] 0 containers: []
	W1213 18:45:45.722964   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:45.722969   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:45.723027   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:45.753044   44722 cri.go:89] found id: ""
	I1213 18:45:45.753057   44722 logs.go:282] 0 containers: []
	W1213 18:45:45.753064   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:45.753069   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:45.753139   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:45.777945   44722 cri.go:89] found id: ""
	I1213 18:45:45.777959   44722 logs.go:282] 0 containers: []
	W1213 18:45:45.777966   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:45.777974   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:45.777984   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:45.788618   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:45.788634   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:45.856342   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:45.847135   16083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:45.847845   16083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:45.849739   16083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:45.850385   16083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:45.851966   16083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:45.847135   16083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:45.847845   16083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:45.849739   16083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:45.850385   16083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:45.851966   16083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:45.856353   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:45.856363   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:45.925928   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:45.925948   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:45.955270   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:45.955286   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:48.526489   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:48.536804   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:48.536878   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:48.564096   44722 cri.go:89] found id: ""
	I1213 18:45:48.564110   44722 logs.go:282] 0 containers: []
	W1213 18:45:48.564116   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:48.564121   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:48.564180   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:48.589084   44722 cri.go:89] found id: ""
	I1213 18:45:48.589098   44722 logs.go:282] 0 containers: []
	W1213 18:45:48.589105   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:48.589117   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:48.589174   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:48.614957   44722 cri.go:89] found id: ""
	I1213 18:45:48.614971   44722 logs.go:282] 0 containers: []
	W1213 18:45:48.614978   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:48.614989   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:48.615045   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:48.639705   44722 cri.go:89] found id: ""
	I1213 18:45:48.639719   44722 logs.go:282] 0 containers: []
	W1213 18:45:48.639725   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:48.639730   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:48.639789   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:48.665151   44722 cri.go:89] found id: ""
	I1213 18:45:48.665165   44722 logs.go:282] 0 containers: []
	W1213 18:45:48.665171   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:48.665176   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:48.665237   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:48.691765   44722 cri.go:89] found id: ""
	I1213 18:45:48.691779   44722 logs.go:282] 0 containers: []
	W1213 18:45:48.691786   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:48.691791   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:48.691846   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:48.718076   44722 cri.go:89] found id: ""
	I1213 18:45:48.718089   44722 logs.go:282] 0 containers: []
	W1213 18:45:48.718096   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:48.718104   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:48.718115   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:48.729150   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:48.729166   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:48.795759   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:48.787631   16186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:48.788312   16186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:48.790025   16186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:48.790514   16186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:48.791993   16186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:48.787631   16186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:48.788312   16186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:48.790025   16186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:48.790514   16186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:48.791993   16186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:48.795769   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:48.795780   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:48.865101   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:48.865123   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:48.893317   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:48.893332   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:51.461504   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:51.471540   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:51.471603   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:51.496535   44722 cri.go:89] found id: ""
	I1213 18:45:51.496549   44722 logs.go:282] 0 containers: []
	W1213 18:45:51.496556   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:51.496561   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:51.496620   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:51.523516   44722 cri.go:89] found id: ""
	I1213 18:45:51.523530   44722 logs.go:282] 0 containers: []
	W1213 18:45:51.523537   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:51.523542   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:51.523601   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:51.548779   44722 cri.go:89] found id: ""
	I1213 18:45:51.548792   44722 logs.go:282] 0 containers: []
	W1213 18:45:51.548799   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:51.548804   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:51.548862   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:51.574426   44722 cri.go:89] found id: ""
	I1213 18:45:51.574439   44722 logs.go:282] 0 containers: []
	W1213 18:45:51.574446   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:51.574451   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:51.574508   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:51.601095   44722 cri.go:89] found id: ""
	I1213 18:45:51.601116   44722 logs.go:282] 0 containers: []
	W1213 18:45:51.601123   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:51.601128   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:51.601185   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:51.630300   44722 cri.go:89] found id: ""
	I1213 18:45:51.630314   44722 logs.go:282] 0 containers: []
	W1213 18:45:51.630321   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:51.630326   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:51.630388   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:51.658180   44722 cri.go:89] found id: ""
	I1213 18:45:51.658194   44722 logs.go:282] 0 containers: []
	W1213 18:45:51.658200   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:51.658208   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:51.658218   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:51.727599   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:51.727617   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:51.740526   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:51.740543   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:51.824581   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:51.815003   16296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:51.815673   16296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:51.817551   16296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:51.818376   16296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:51.820029   16296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:51.815003   16296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:51.815673   16296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:51.817551   16296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:51.818376   16296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:51.820029   16296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:51.824598   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:51.824608   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:51.895130   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:51.895149   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:54.423725   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:54.434109   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:54.434167   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:54.461075   44722 cri.go:89] found id: ""
	I1213 18:45:54.461096   44722 logs.go:282] 0 containers: []
	W1213 18:45:54.461104   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:54.461109   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:54.461169   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:54.486465   44722 cri.go:89] found id: ""
	I1213 18:45:54.486479   44722 logs.go:282] 0 containers: []
	W1213 18:45:54.486485   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:54.486490   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:54.486545   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:54.512518   44722 cri.go:89] found id: ""
	I1213 18:45:54.512532   44722 logs.go:282] 0 containers: []
	W1213 18:45:54.512539   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:54.512556   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:54.512613   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:54.539809   44722 cri.go:89] found id: ""
	I1213 18:45:54.539823   44722 logs.go:282] 0 containers: []
	W1213 18:45:54.539830   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:54.539835   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:54.539897   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:54.570146   44722 cri.go:89] found id: ""
	I1213 18:45:54.570159   44722 logs.go:282] 0 containers: []
	W1213 18:45:54.570166   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:54.570170   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:54.570224   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:54.596027   44722 cri.go:89] found id: ""
	I1213 18:45:54.596041   44722 logs.go:282] 0 containers: []
	W1213 18:45:54.596047   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:54.596052   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:54.596113   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:54.623337   44722 cri.go:89] found id: ""
	I1213 18:45:54.623351   44722 logs.go:282] 0 containers: []
	W1213 18:45:54.623358   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:54.623367   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:54.623382   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:54.654287   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:54.654305   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:54.720405   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:54.720426   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:54.731640   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:54.731656   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:54.800062   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:54.792084   16414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:54.792588   16414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:54.794071   16414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:54.794411   16414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:54.795882   16414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:54.792084   16414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:54.792588   16414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:54.794071   16414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:54.794411   16414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:54.795882   16414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:54.800085   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:54.800095   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:57.370530   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:57.381975   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:57.382044   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:57.410748   44722 cri.go:89] found id: ""
	I1213 18:45:57.410761   44722 logs.go:282] 0 containers: []
	W1213 18:45:57.410768   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:57.410773   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:57.410834   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:57.437110   44722 cri.go:89] found id: ""
	I1213 18:45:57.437123   44722 logs.go:282] 0 containers: []
	W1213 18:45:57.437130   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:57.437135   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:57.437196   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:57.463356   44722 cri.go:89] found id: ""
	I1213 18:45:57.463370   44722 logs.go:282] 0 containers: []
	W1213 18:45:57.463377   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:57.463381   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:57.463436   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:57.488350   44722 cri.go:89] found id: ""
	I1213 18:45:57.488364   44722 logs.go:282] 0 containers: []
	W1213 18:45:57.488381   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:57.488387   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:57.488442   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:57.513926   44722 cri.go:89] found id: ""
	I1213 18:45:57.513939   44722 logs.go:282] 0 containers: []
	W1213 18:45:57.513951   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:57.513956   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:57.514013   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:57.539641   44722 cri.go:89] found id: ""
	I1213 18:45:57.539655   44722 logs.go:282] 0 containers: []
	W1213 18:45:57.539661   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:57.539666   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:57.539722   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:57.565672   44722 cri.go:89] found id: ""
	I1213 18:45:57.565686   44722 logs.go:282] 0 containers: []
	W1213 18:45:57.565693   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:57.565700   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:57.565710   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:57.637461   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:57.637486   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:57.648402   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:57.648418   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:57.716551   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:57.708424   16510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:57.708971   16510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:57.710676   16510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:57.711086   16510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:57.712583   16510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:57.708424   16510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:57.708971   16510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:57.710676   16510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:57.711086   16510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:57.712583   16510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:57.716567   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:57.716579   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:57.785661   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:57.785681   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:46:00.318382   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:46:00.335223   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:46:00.335290   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:46:00.415052   44722 cri.go:89] found id: ""
	I1213 18:46:00.415068   44722 logs.go:282] 0 containers: []
	W1213 18:46:00.415075   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:46:00.415080   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:46:00.415144   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:46:00.448025   44722 cri.go:89] found id: ""
	I1213 18:46:00.448039   44722 logs.go:282] 0 containers: []
	W1213 18:46:00.448047   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:46:00.448052   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:46:00.448120   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:46:00.478830   44722 cri.go:89] found id: ""
	I1213 18:46:00.478844   44722 logs.go:282] 0 containers: []
	W1213 18:46:00.478851   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:46:00.478856   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:46:00.478915   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:46:00.510923   44722 cri.go:89] found id: ""
	I1213 18:46:00.510943   44722 logs.go:282] 0 containers: []
	W1213 18:46:00.510951   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:46:00.510956   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:46:00.511018   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:46:00.538053   44722 cri.go:89] found id: ""
	I1213 18:46:00.538068   44722 logs.go:282] 0 containers: []
	W1213 18:46:00.538075   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:46:00.538080   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:46:00.538139   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:46:00.563080   44722 cri.go:89] found id: ""
	I1213 18:46:00.563094   44722 logs.go:282] 0 containers: []
	W1213 18:46:00.563101   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:46:00.563107   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:46:00.563162   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:46:00.588696   44722 cri.go:89] found id: ""
	I1213 18:46:00.588710   44722 logs.go:282] 0 containers: []
	W1213 18:46:00.588716   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:46:00.588724   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:46:00.588734   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:46:00.655165   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:46:00.655185   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:46:00.667201   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:46:00.667217   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:46:00.732035   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:46:00.723385   16613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:00.723987   16613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:00.725839   16613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:00.726393   16613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:00.728162   16613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:46:00.723385   16613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:00.723987   16613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:00.725839   16613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:00.726393   16613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:00.728162   16613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:46:00.732045   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:46:00.732055   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:46:00.803574   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:46:00.803592   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:46:03.335736   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:46:03.347198   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:46:03.347266   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:46:03.376587   44722 cri.go:89] found id: ""
	I1213 18:46:03.376600   44722 logs.go:282] 0 containers: []
	W1213 18:46:03.376625   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:46:03.376630   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:46:03.376698   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:46:03.407284   44722 cri.go:89] found id: ""
	I1213 18:46:03.407298   44722 logs.go:282] 0 containers: []
	W1213 18:46:03.407305   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:46:03.407310   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:46:03.407379   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:46:03.432194   44722 cri.go:89] found id: ""
	I1213 18:46:03.432219   44722 logs.go:282] 0 containers: []
	W1213 18:46:03.432226   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:46:03.432231   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:46:03.432297   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:46:03.461490   44722 cri.go:89] found id: ""
	I1213 18:46:03.461504   44722 logs.go:282] 0 containers: []
	W1213 18:46:03.461520   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:46:03.461528   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:46:03.461586   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:46:03.486500   44722 cri.go:89] found id: ""
	I1213 18:46:03.486514   44722 logs.go:282] 0 containers: []
	W1213 18:46:03.486521   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:46:03.486526   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:46:03.486580   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:46:03.516064   44722 cri.go:89] found id: ""
	I1213 18:46:03.516079   44722 logs.go:282] 0 containers: []
	W1213 18:46:03.516095   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:46:03.516101   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:46:03.516173   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:46:03.543241   44722 cri.go:89] found id: ""
	I1213 18:46:03.543261   44722 logs.go:282] 0 containers: []
	W1213 18:46:03.543269   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:46:03.543277   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:46:03.543288   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:46:03.614698   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:46:03.606014   16715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:03.606848   16715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:03.608572   16715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:03.609328   16715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:03.610814   16715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:46:03.606014   16715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:03.606848   16715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:03.608572   16715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:03.609328   16715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:03.610814   16715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:46:03.614708   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:46:03.614719   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:46:03.683610   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:46:03.683629   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:46:03.714101   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:46:03.714118   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:46:03.783821   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:46:03.783841   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:46:06.296661   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:46:06.307402   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:46:06.307473   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:46:06.342139   44722 cri.go:89] found id: ""
	I1213 18:46:06.342152   44722 logs.go:282] 0 containers: []
	W1213 18:46:06.342159   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:46:06.342164   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:46:06.342223   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:46:06.376710   44722 cri.go:89] found id: ""
	I1213 18:46:06.376724   44722 logs.go:282] 0 containers: []
	W1213 18:46:06.376730   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:46:06.376735   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:46:06.376793   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:46:06.412732   44722 cri.go:89] found id: ""
	I1213 18:46:06.412746   44722 logs.go:282] 0 containers: []
	W1213 18:46:06.412753   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:46:06.412758   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:46:06.412814   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:46:06.445341   44722 cri.go:89] found id: ""
	I1213 18:46:06.445354   44722 logs.go:282] 0 containers: []
	W1213 18:46:06.445360   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:46:06.445365   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:46:06.445423   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:46:06.470587   44722 cri.go:89] found id: ""
	I1213 18:46:06.470601   44722 logs.go:282] 0 containers: []
	W1213 18:46:06.470608   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:46:06.470613   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:46:06.470667   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:46:06.495331   44722 cri.go:89] found id: ""
	I1213 18:46:06.495347   44722 logs.go:282] 0 containers: []
	W1213 18:46:06.495354   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:46:06.495360   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:46:06.495420   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:46:06.521489   44722 cri.go:89] found id: ""
	I1213 18:46:06.521503   44722 logs.go:282] 0 containers: []
	W1213 18:46:06.521510   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:46:06.521517   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:46:06.521531   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:46:06.552192   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:46:06.552209   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:46:06.618284   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:46:06.618302   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:46:06.630541   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:46:06.630558   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:46:06.702858   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:46:06.695039   16839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:06.695585   16839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:06.697148   16839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:06.697474   16839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:06.698996   16839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:46:06.695039   16839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:06.695585   16839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:06.697148   16839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:06.697474   16839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:06.698996   16839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:46:06.702868   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:46:06.702881   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:46:09.275499   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:46:09.285598   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:46:09.285657   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:46:09.313861   44722 cri.go:89] found id: ""
	I1213 18:46:09.313885   44722 logs.go:282] 0 containers: []
	W1213 18:46:09.313893   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:46:09.313898   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:46:09.313956   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:46:09.346645   44722 cri.go:89] found id: ""
	I1213 18:46:09.346661   44722 logs.go:282] 0 containers: []
	W1213 18:46:09.346671   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:46:09.346677   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:46:09.346742   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:46:09.381723   44722 cri.go:89] found id: ""
	I1213 18:46:09.381743   44722 logs.go:282] 0 containers: []
	W1213 18:46:09.381750   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:46:09.381755   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:46:09.381842   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:46:09.415093   44722 cri.go:89] found id: ""
	I1213 18:46:09.415106   44722 logs.go:282] 0 containers: []
	W1213 18:46:09.415113   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:46:09.415118   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:46:09.415178   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:46:09.440412   44722 cri.go:89] found id: ""
	I1213 18:46:09.440426   44722 logs.go:282] 0 containers: []
	W1213 18:46:09.440433   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:46:09.440438   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:46:09.440495   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:46:09.469945   44722 cri.go:89] found id: ""
	I1213 18:46:09.469959   44722 logs.go:282] 0 containers: []
	W1213 18:46:09.469965   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:46:09.469971   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:46:09.470037   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:46:09.495452   44722 cri.go:89] found id: ""
	I1213 18:46:09.495478   44722 logs.go:282] 0 containers: []
	W1213 18:46:09.495486   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:46:09.495494   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:46:09.495505   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:46:09.507701   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:46:09.507716   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:46:09.577735   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:46:09.564499   16927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:09.564927   16927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:09.571154   16927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:09.571832   16927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:09.573056   16927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:46:09.564499   16927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:09.564927   16927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:09.571154   16927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:09.571832   16927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:09.573056   16927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:46:09.577745   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:46:09.577756   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:46:09.650543   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:46:09.650564   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:46:09.680040   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:46:09.680057   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:46:12.249315   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:46:12.259200   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:46:12.259257   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:46:12.284607   44722 cri.go:89] found id: ""
	I1213 18:46:12.284620   44722 logs.go:282] 0 containers: []
	W1213 18:46:12.284627   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:46:12.284632   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:46:12.284697   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:46:12.318167   44722 cri.go:89] found id: ""
	I1213 18:46:12.318180   44722 logs.go:282] 0 containers: []
	W1213 18:46:12.318187   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:46:12.318191   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:46:12.318249   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:46:12.361187   44722 cri.go:89] found id: ""
	I1213 18:46:12.361201   44722 logs.go:282] 0 containers: []
	W1213 18:46:12.361208   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:46:12.361213   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:46:12.361270   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:46:12.396970   44722 cri.go:89] found id: ""
	I1213 18:46:12.396983   44722 logs.go:282] 0 containers: []
	W1213 18:46:12.396990   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:46:12.396995   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:46:12.397098   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:46:12.423202   44722 cri.go:89] found id: ""
	I1213 18:46:12.423215   44722 logs.go:282] 0 containers: []
	W1213 18:46:12.423222   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:46:12.423227   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:46:12.423286   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:46:12.448231   44722 cri.go:89] found id: ""
	I1213 18:46:12.448245   44722 logs.go:282] 0 containers: []
	W1213 18:46:12.448252   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:46:12.448257   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:46:12.448314   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:46:12.477927   44722 cri.go:89] found id: ""
	I1213 18:46:12.477941   44722 logs.go:282] 0 containers: []
	W1213 18:46:12.477949   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:46:12.477956   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:46:12.477966   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:46:12.547816   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:46:12.547834   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:46:12.559262   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:46:12.559280   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:46:12.622773   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:46:12.614428   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:12.615068   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:12.616576   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:12.617216   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:12.618857   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:46:12.614428   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:12.615068   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:12.616576   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:12.617216   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:12.618857   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:46:12.622783   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:46:12.622793   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:46:12.692295   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:46:12.692312   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:46:15.224550   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:46:15.235025   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:46:15.235085   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:46:15.261669   44722 cri.go:89] found id: ""
	I1213 18:46:15.261683   44722 logs.go:282] 0 containers: []
	W1213 18:46:15.261690   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:46:15.261695   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:46:15.261755   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:46:15.290899   44722 cri.go:89] found id: ""
	I1213 18:46:15.290913   44722 logs.go:282] 0 containers: []
	W1213 18:46:15.290920   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:46:15.290925   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:46:15.290979   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:46:15.317538   44722 cri.go:89] found id: ""
	I1213 18:46:15.317551   44722 logs.go:282] 0 containers: []
	W1213 18:46:15.317558   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:46:15.317563   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:46:15.317621   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:46:15.359563   44722 cri.go:89] found id: ""
	I1213 18:46:15.359577   44722 logs.go:282] 0 containers: []
	W1213 18:46:15.359584   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:46:15.359589   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:46:15.359645   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:46:15.395203   44722 cri.go:89] found id: ""
	I1213 18:46:15.395216   44722 logs.go:282] 0 containers: []
	W1213 18:46:15.395223   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:46:15.395228   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:46:15.395288   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:46:15.428291   44722 cri.go:89] found id: ""
	I1213 18:46:15.428304   44722 logs.go:282] 0 containers: []
	W1213 18:46:15.428311   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:46:15.428316   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:46:15.428372   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:46:15.453931   44722 cri.go:89] found id: ""
	I1213 18:46:15.453945   44722 logs.go:282] 0 containers: []
	W1213 18:46:15.453951   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:46:15.453958   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:46:15.453969   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:46:15.521521   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:46:15.512931   17130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:15.513463   17130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:15.515174   17130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:15.515484   17130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:15.517840   17130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:46:15.512931   17130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:15.513463   17130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:15.515174   17130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:15.515484   17130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:15.517840   17130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:46:15.521531   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:46:15.521541   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:46:15.591139   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:46:15.591160   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:46:15.622465   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:46:15.622481   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:46:15.691330   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:46:15.691348   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:46:18.203416   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:46:18.213952   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:46:18.214025   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:46:18.239778   44722 cri.go:89] found id: ""
	I1213 18:46:18.239792   44722 logs.go:282] 0 containers: []
	W1213 18:46:18.239808   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:46:18.239814   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:46:18.239879   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:46:18.264101   44722 cri.go:89] found id: ""
	I1213 18:46:18.264114   44722 logs.go:282] 0 containers: []
	W1213 18:46:18.264121   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:46:18.264126   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:46:18.264185   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:46:18.289302   44722 cri.go:89] found id: ""
	I1213 18:46:18.289316   44722 logs.go:282] 0 containers: []
	W1213 18:46:18.289323   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:46:18.289328   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:46:18.289386   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:46:18.316088   44722 cri.go:89] found id: ""
	I1213 18:46:18.316101   44722 logs.go:282] 0 containers: []
	W1213 18:46:18.316108   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:46:18.316116   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:46:18.316174   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:46:18.351768   44722 cri.go:89] found id: ""
	I1213 18:46:18.351781   44722 logs.go:282] 0 containers: []
	W1213 18:46:18.351788   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:46:18.351792   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:46:18.351846   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:46:18.382427   44722 cri.go:89] found id: ""
	I1213 18:46:18.382441   44722 logs.go:282] 0 containers: []
	W1213 18:46:18.382447   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:46:18.382452   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:46:18.382509   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:46:18.410191   44722 cri.go:89] found id: ""
	I1213 18:46:18.410205   44722 logs.go:282] 0 containers: []
	W1213 18:46:18.410212   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:46:18.410220   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:46:18.410230   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:46:18.473809   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:46:18.464747   17232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:18.465711   17232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:18.467472   17232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:18.467819   17232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:18.469591   17232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:46:18.464747   17232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:18.465711   17232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:18.467472   17232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:18.467819   17232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:18.469591   17232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:46:18.473819   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:46:18.473837   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:46:18.545360   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:46:18.545378   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:46:18.573170   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:46:18.573186   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:46:18.638179   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:46:18.638198   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:46:21.149461   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:46:21.159925   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:46:21.159987   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:46:21.185083   44722 cri.go:89] found id: ""
	I1213 18:46:21.185097   44722 logs.go:282] 0 containers: []
	W1213 18:46:21.185104   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:46:21.185109   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:46:21.185169   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:46:21.210110   44722 cri.go:89] found id: ""
	I1213 18:46:21.210124   44722 logs.go:282] 0 containers: []
	W1213 18:46:21.210131   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:46:21.210136   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:46:21.210199   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:46:21.235437   44722 cri.go:89] found id: ""
	I1213 18:46:21.235450   44722 logs.go:282] 0 containers: []
	W1213 18:46:21.235457   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:46:21.235462   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:46:21.235518   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:46:21.264027   44722 cri.go:89] found id: ""
	I1213 18:46:21.264041   44722 logs.go:282] 0 containers: []
	W1213 18:46:21.264061   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:46:21.264067   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:46:21.264134   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:46:21.291534   44722 cri.go:89] found id: ""
	I1213 18:46:21.291548   44722 logs.go:282] 0 containers: []
	W1213 18:46:21.291567   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:46:21.291571   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:46:21.291638   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:46:21.321987   44722 cri.go:89] found id: ""
	I1213 18:46:21.322010   44722 logs.go:282] 0 containers: []
	W1213 18:46:21.322018   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:46:21.322023   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:46:21.322088   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:46:21.354190   44722 cri.go:89] found id: ""
	I1213 18:46:21.354218   44722 logs.go:282] 0 containers: []
	W1213 18:46:21.354225   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:46:21.354232   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:46:21.354242   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:46:21.432072   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:46:21.432092   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:46:21.443924   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:46:21.443941   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:46:21.512256   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:46:21.503676   17342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:21.504240   17342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:21.506119   17342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:21.506493   17342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:21.508024   17342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:46:21.503676   17342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:21.504240   17342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:21.506119   17342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:21.506493   17342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:21.508024   17342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:46:21.512269   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:46:21.512281   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:46:21.584867   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:46:21.584887   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:46:24.118323   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:46:24.129552   44722 kubeadm.go:602] duration metric: took 4m2.563511626s to restartPrimaryControlPlane
	W1213 18:46:24.129614   44722 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1213 18:46:24.129691   44722 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 18:46:24.541036   44722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 18:46:24.553708   44722 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 18:46:24.561742   44722 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 18:46:24.561810   44722 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 18:46:24.569735   44722 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 18:46:24.569745   44722 kubeadm.go:158] found existing configuration files:
	
	I1213 18:46:24.569794   44722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 18:46:24.577570   44722 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 18:46:24.577624   44722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 18:46:24.584990   44722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 18:46:24.592683   44722 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 18:46:24.592744   44722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 18:46:24.600210   44722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 18:46:24.607772   44722 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 18:46:24.607829   44722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 18:46:24.615311   44722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 18:46:24.623206   44722 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 18:46:24.623270   44722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 18:46:24.631351   44722 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 18:46:24.746076   44722 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 18:46:24.746546   44722 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 18:46:24.812383   44722 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 18:50:26.971755   44722 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 18:50:26.971788   44722 kubeadm.go:319] 
	I1213 18:50:26.971891   44722 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 18:50:26.975722   44722 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 18:50:26.975775   44722 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 18:50:26.975864   44722 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 18:50:26.975918   44722 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 18:50:26.975952   44722 kubeadm.go:319] OS: Linux
	I1213 18:50:26.975995   44722 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 18:50:26.976042   44722 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 18:50:26.976088   44722 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 18:50:26.976134   44722 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 18:50:26.976181   44722 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 18:50:26.976228   44722 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 18:50:26.976271   44722 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 18:50:26.976318   44722 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 18:50:26.976374   44722 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 18:50:26.976446   44722 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 18:50:26.976550   44722 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 18:50:26.976642   44722 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 18:50:26.976705   44722 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 18:50:26.979839   44722 out.go:252]   - Generating certificates and keys ...
	I1213 18:50:26.979929   44722 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 18:50:26.979994   44722 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 18:50:26.980071   44722 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 18:50:26.980130   44722 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 18:50:26.980204   44722 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 18:50:26.980256   44722 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 18:50:26.980323   44722 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 18:50:26.980389   44722 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 18:50:26.980463   44722 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 18:50:26.980534   44722 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 18:50:26.980570   44722 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 18:50:26.980625   44722 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 18:50:26.980698   44722 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 18:50:26.980766   44722 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 18:50:26.980827   44722 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 18:50:26.980893   44722 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 18:50:26.980947   44722 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 18:50:26.981062   44722 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 18:50:26.981134   44722 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 18:50:26.984046   44722 out.go:252]   - Booting up control plane ...
	I1213 18:50:26.984213   44722 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 18:50:26.984302   44722 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 18:50:26.984406   44722 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 18:50:26.984526   44722 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 18:50:26.984621   44722 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 18:50:26.984728   44722 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 18:50:26.984811   44722 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 18:50:26.984849   44722 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 18:50:26.984978   44722 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 18:50:26.985109   44722 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 18:50:26.985193   44722 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000261471s
	I1213 18:50:26.985199   44722 kubeadm.go:319] 
	I1213 18:50:26.985265   44722 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 18:50:26.985304   44722 kubeadm.go:319] 	- The kubelet is not running
	I1213 18:50:26.985407   44722 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 18:50:26.985410   44722 kubeadm.go:319] 
	I1213 18:50:26.985524   44722 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 18:50:26.985559   44722 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 18:50:26.985594   44722 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 18:50:26.985645   44722 kubeadm.go:319] 
	W1213 18:50:26.985723   44722 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000261471s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1213 18:50:26.989121   44722 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 18:50:27.401657   44722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 18:50:27.414174   44722 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 18:50:27.414227   44722 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 18:50:27.422069   44722 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 18:50:27.422079   44722 kubeadm.go:158] found existing configuration files:
	
	I1213 18:50:27.422131   44722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 18:50:27.429688   44722 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 18:50:27.429740   44722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 18:50:27.436848   44722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 18:50:27.444475   44722 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 18:50:27.444539   44722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 18:50:27.451626   44722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 18:50:27.458858   44722 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 18:50:27.458912   44722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 18:50:27.466216   44722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 18:50:27.473793   44722 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 18:50:27.473846   44722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 18:50:27.481268   44722 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 18:50:27.532748   44722 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 18:50:27.532805   44722 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 18:50:27.602576   44722 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 18:50:27.602639   44722 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 18:50:27.602674   44722 kubeadm.go:319] OS: Linux
	I1213 18:50:27.602718   44722 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 18:50:27.602765   44722 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 18:50:27.602811   44722 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 18:50:27.602858   44722 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 18:50:27.602905   44722 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 18:50:27.602952   44722 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 18:50:27.602996   44722 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 18:50:27.603043   44722 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 18:50:27.603088   44722 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 18:50:27.670270   44722 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 18:50:27.670407   44722 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 18:50:27.670497   44722 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 18:50:27.681577   44722 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 18:50:27.686860   44722 out.go:252]   - Generating certificates and keys ...
	I1213 18:50:27.686961   44722 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 18:50:27.687031   44722 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 18:50:27.687115   44722 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 18:50:27.687184   44722 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 18:50:27.687264   44722 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 18:50:27.687325   44722 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 18:50:27.687398   44722 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 18:50:27.687471   44722 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 18:50:27.687593   44722 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 18:50:27.687675   44722 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 18:50:27.687715   44722 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 18:50:27.687778   44722 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 18:50:28.283128   44722 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 18:50:28.400218   44722 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 18:50:28.813695   44722 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 18:50:29.036602   44722 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 18:50:29.078002   44722 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 18:50:29.078680   44722 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 18:50:29.081273   44722 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 18:50:29.084492   44722 out.go:252]   - Booting up control plane ...
	I1213 18:50:29.084588   44722 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 18:50:29.084675   44722 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 18:50:29.086298   44722 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 18:50:29.101051   44722 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 18:50:29.101487   44722 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 18:50:29.109109   44722 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 18:50:29.109586   44722 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 18:50:29.109636   44722 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 18:50:29.237458   44722 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 18:50:29.237571   44722 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 18:54:29.237512   44722 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000245862s
	I1213 18:54:29.237544   44722 kubeadm.go:319] 
	I1213 18:54:29.237597   44722 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 18:54:29.237627   44722 kubeadm.go:319] 	- The kubelet is not running
	I1213 18:54:29.237724   44722 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 18:54:29.237728   44722 kubeadm.go:319] 
	I1213 18:54:29.237836   44722 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 18:54:29.237865   44722 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 18:54:29.237893   44722 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 18:54:29.237896   44722 kubeadm.go:319] 
	I1213 18:54:29.241945   44722 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 18:54:29.242401   44722 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 18:54:29.242519   44722 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 18:54:29.242782   44722 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 18:54:29.242790   44722 kubeadm.go:319] 
	I1213 18:54:29.242854   44722 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 18:54:29.242916   44722 kubeadm.go:403] duration metric: took 12m7.716453663s to StartCluster
	I1213 18:54:29.242947   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:54:29.243009   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:54:29.267936   44722 cri.go:89] found id: ""
	I1213 18:54:29.267953   44722 logs.go:282] 0 containers: []
	W1213 18:54:29.267960   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:54:29.267966   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:54:29.268023   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:54:29.295961   44722 cri.go:89] found id: ""
	I1213 18:54:29.295975   44722 logs.go:282] 0 containers: []
	W1213 18:54:29.295982   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:54:29.295987   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:54:29.296049   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:54:29.321287   44722 cri.go:89] found id: ""
	I1213 18:54:29.321301   44722 logs.go:282] 0 containers: []
	W1213 18:54:29.321308   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:54:29.321313   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:54:29.321369   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:54:29.346752   44722 cri.go:89] found id: ""
	I1213 18:54:29.346766   44722 logs.go:282] 0 containers: []
	W1213 18:54:29.346773   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:54:29.346778   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:54:29.346840   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:54:29.373200   44722 cri.go:89] found id: ""
	I1213 18:54:29.373214   44722 logs.go:282] 0 containers: []
	W1213 18:54:29.373222   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:54:29.373227   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:54:29.373284   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:54:29.399377   44722 cri.go:89] found id: ""
	I1213 18:54:29.399390   44722 logs.go:282] 0 containers: []
	W1213 18:54:29.399397   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:54:29.399403   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:54:29.399459   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:54:29.427837   44722 cri.go:89] found id: ""
	I1213 18:54:29.427851   44722 logs.go:282] 0 containers: []
	W1213 18:54:29.427867   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:54:29.427876   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:54:29.427886   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:54:29.456109   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:54:29.456125   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:54:29.522138   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:54:29.522156   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:54:29.533671   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:54:29.533686   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:54:29.610367   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:54:29.601277   21159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:54:29.601976   21159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:54:29.603577   21159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:54:29.604094   21159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:54:29.605709   21159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:54:29.601277   21159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:54:29.601976   21159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:54:29.603577   21159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:54:29.604094   21159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:54:29.605709   21159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:54:29.610381   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:54:29.610392   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1213 18:54:29.688966   44722 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000245862s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 18:54:29.689015   44722 out.go:285] * 
	W1213 18:54:29.689125   44722 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000245862s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 18:54:29.689180   44722 out.go:285] * 
	W1213 18:54:29.691288   44722 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 18:54:29.696180   44722 out.go:203] 
	W1213 18:54:29.699069   44722 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000245862s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 18:54:29.699113   44722 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 18:54:29.699131   44722 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 18:54:29.702236   44722 out.go:203] 
	
	
	==> CRI-O <==
	Dec 13 18:50:27 functional-752103 crio[9949]: time="2025-12-13T18:50:27.674061922Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=54566989-a940-4ea0-9cb7-11a5ead5fdab name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:50:27 functional-752103 crio[9949]: time="2025-12-13T18:50:27.67476674Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=9907b75f-aebf-4fc7-948f-3e37eff08342 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:50:27 functional-752103 crio[9949]: time="2025-12-13T18:50:27.675335917Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=a5823f6b-c128-468c-ad19-87c38dcb3493 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:50:27 functional-752103 crio[9949]: time="2025-12-13T18:50:27.675801504Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=eb5c5b0d-734a-42c7-beea-2ae04458cd2c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:50:27 functional-752103 crio[9949]: time="2025-12-13T18:50:27.676236125Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=dc8b8dc3-cec8-44a2-afbb-932c674af235 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:50:27 functional-752103 crio[9949]: time="2025-12-13T18:50:27.676718434Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=fae4abe6-592a-492b-809b-edd01682c93f name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:50:27 functional-752103 crio[9949]: time="2025-12-13T18:50:27.677348338Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=21883f8b-9b90-4bb8-9843-c91d88abb931 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:38 functional-752103 crio[9949]: time="2025-12-13T18:54:38.738192708Z" level=info msg="Checking image status: kicbase/echo-server:functional-752103" id=42880e29-20fa-4822-ab33-09bfec92f2e2 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:38 functional-752103 crio[9949]: time="2025-12-13T18:54:38.738390305Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 13 18:54:38 functional-752103 crio[9949]: time="2025-12-13T18:54:38.738442195Z" level=info msg="Image kicbase/echo-server:functional-752103 not found" id=42880e29-20fa-4822-ab33-09bfec92f2e2 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:38 functional-752103 crio[9949]: time="2025-12-13T18:54:38.738517559Z" level=info msg="Neither image nor artifact kicbase/echo-server:functional-752103 found" id=42880e29-20fa-4822-ab33-09bfec92f2e2 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:38 functional-752103 crio[9949]: time="2025-12-13T18:54:38.772733363Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-752103" id=0cafe629-ccfc-4817-862d-afdd77db9d4d name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:38 functional-752103 crio[9949]: time="2025-12-13T18:54:38.772935481Z" level=info msg="Image docker.io/kicbase/echo-server:functional-752103 not found" id=0cafe629-ccfc-4817-862d-afdd77db9d4d name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:38 functional-752103 crio[9949]: time="2025-12-13T18:54:38.772986583Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:functional-752103 found" id=0cafe629-ccfc-4817-862d-afdd77db9d4d name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:38 functional-752103 crio[9949]: time="2025-12-13T18:54:38.820407985Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-752103" id=6c4fd7ba-e600-48a2-9885-e62592ca43d8 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:38 functional-752103 crio[9949]: time="2025-12-13T18:54:38.820709337Z" level=info msg="Image localhost/kicbase/echo-server:functional-752103 not found" id=6c4fd7ba-e600-48a2-9885-e62592ca43d8 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:38 functional-752103 crio[9949]: time="2025-12-13T18:54:38.82083637Z" level=info msg="Neither image nor artifact localhost/kicbase/echo-server:functional-752103 found" id=6c4fd7ba-e600-48a2-9885-e62592ca43d8 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:42 functional-752103 crio[9949]: time="2025-12-13T18:54:42.697715247Z" level=info msg="Checking image status: kicbase/echo-server:functional-752103" id=0cc0ac5d-dc21-442d-8110-bad1c5434563 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:42 functional-752103 crio[9949]: time="2025-12-13T18:54:42.697864812Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 13 18:54:42 functional-752103 crio[9949]: time="2025-12-13T18:54:42.697906043Z" level=info msg="Image kicbase/echo-server:functional-752103 not found" id=0cc0ac5d-dc21-442d-8110-bad1c5434563 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:42 functional-752103 crio[9949]: time="2025-12-13T18:54:42.697969526Z" level=info msg="Neither image nor artifact kicbase/echo-server:functional-752103 found" id=0cc0ac5d-dc21-442d-8110-bad1c5434563 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:42 functional-752103 crio[9949]: time="2025-12-13T18:54:42.726188607Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-752103" id=ec1be44c-64b8-4c7c-9111-17fc0443252c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:42 functional-752103 crio[9949]: time="2025-12-13T18:54:42.726323311Z" level=info msg="Image docker.io/kicbase/echo-server:functional-752103 not found" id=ec1be44c-64b8-4c7c-9111-17fc0443252c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:42 functional-752103 crio[9949]: time="2025-12-13T18:54:42.726371377Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:functional-752103 found" id=ec1be44c-64b8-4c7c-9111-17fc0443252c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:42 functional-752103 crio[9949]: time="2025-12-13T18:54:42.758302806Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-752103" id=f3f55714-9794-4e76-a331-e7982a0121c6 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:56:37.491477   23313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:56:37.492261   23313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:56:37.493832   23313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:56:37.494358   23313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:56:37.495991   23313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 17:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014739] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.517365] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033368] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.774100] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.795951] kauditd_printk_skb: 39 callbacks suppressed
	[Dec13 18:17] overlayfs: idmapped layers are currently not supported
	[  +0.067652] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec13 18:23] overlayfs: idmapped layers are currently not supported
	[Dec13 18:24] overlayfs: idmapped layers are currently not supported
	[Dec13 18:42] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 18:56:37 up  1:39,  0 user,  load average: 0.38, 0.31, 0.33
	Linux functional-752103 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 18:56:34 functional-752103 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 18:56:35 functional-752103 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1128.
	Dec 13 18:56:35 functional-752103 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:56:35 functional-752103 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:56:35 functional-752103 kubelet[23203]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:56:35 functional-752103 kubelet[23203]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:56:35 functional-752103 kubelet[23203]: E1213 18:56:35.630195   23203 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 18:56:35 functional-752103 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 18:56:35 functional-752103 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 18:56:36 functional-752103 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1129.
	Dec 13 18:56:36 functional-752103 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:56:36 functional-752103 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:56:36 functional-752103 kubelet[23208]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:56:36 functional-752103 kubelet[23208]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:56:36 functional-752103 kubelet[23208]: E1213 18:56:36.382459   23208 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 18:56:36 functional-752103 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 18:56:36 functional-752103 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 18:56:37 functional-752103 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1130.
	Dec 13 18:56:37 functional-752103 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:56:37 functional-752103 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:56:37 functional-752103 kubelet[23230]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:56:37 functional-752103 kubelet[23230]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:56:37 functional-752103 kubelet[23230]: E1213 18:56:37.138633   23230 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 18:56:37 functional-752103 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 18:56:37 functional-752103 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
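The kubelet section of the log above shows why the control plane never came up: the kubelet for v1.35.0-beta.0 exits immediately with "kubelet is configured to not run on a host using cgroup v1", so the kubeadm health check at 127.0.0.1:10248 can never succeed. A minimal sketch for confirming the host-side cgroup version from inside the node, assuming the docker-driver node for this profile is reachable via 'minikube ssh' (the profile name is the one used in this report; everything else here is illustrative, not taken from the test):

	# Check which cgroup hierarchy the kernel exposes inside the node:
	#   'tmpfs'     -> cgroup v1 (the situation reported above)
	#   'cgroup2fs' -> cgroup v2
	out/minikube-linux-arm64 -p functional-752103 ssh -- stat -fc %T /sys/fs/cgroup

	# Inspect the crash-looping kubelet unit, as kubeadm itself suggests:
	out/minikube-linux-arm64 -p functional-752103 ssh -- sudo journalctl -xeu kubelet --no-pager | tail -n 50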
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-752103 -n functional-752103
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-752103 -n functional-752103: exit status 2 (381.57942ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-752103" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (2.37s)
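The exit message in the log above points at a workaround ('try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start'). A rough sketch of that retry, assuming the profile is restarted with the driver and runtime used in this job; anything not quoted from the log is an assumption, and on a cgroup v1 host the flag alone may not be enough given the kubelet's cgroup v1 validation:

	# Retry the start with the kubelet cgroup driver pinned to systemd, as suggested above.
	out/minikube-linux-arm64 start -p functional-752103 \
	  --driver=docker --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd

The kubeadm warning alternatively mentions the kubelet option 'FailCgroupV1'; that appears to be a KubeletConfiguration file field rather than a command-line flag, so it is not shown in the sketch.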

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (241.66s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
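The 4m0s wait above is a polling loop against the profile's apiserver, issuing the pod list that the WARNING lines below show being refused. A rough client-go equivalent, assuming a kubeconfig path (hypothetical here) and substituting wait.PollUntilContextTimeout for the test helper's own retry logic:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; the test uses the profile's own config.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Same request the warnings show:
	// GET /api/v1/namespaces/kube-system/pods?labelSelector=integration-test=storage-provisioner
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := client.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{
				LabelSelector: "integration-test=storage-provisioner",
			})
			if err != nil {
				// A refused connection is retryable, which is why the log
				// keeps printing WARNING lines instead of failing outright.
				fmt.Println("WARNING:", err)
				return false, nil
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil
		})
	if err != nil {
		fmt.Println("timed out waiting for storage-provisioner:", err)
	}
}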
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[previous WARNING line repeated 10 more times, verbatim]
I1213 18:54:56.061311    4637 retry.go:31] will retry after 3.206668553s: Temporary Error: Get "http://10.100.89.176": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[previous WARNING line repeated 12 more times, verbatim]
I1213 18:55:09.268477    4637 retry.go:31] will retry after 5.426184985s: Temporary Error: Get "http://10.100.89.176": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[previous WARNING line repeated 14 more times, verbatim]
I1213 18:55:24.694938    4637 retry.go:31] will retry after 9.151808817s: Temporary Error: Get "http://10.100.89.176": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[previous WARNING line repeated 19 more times, verbatim]
I1213 18:55:43.848289    4637 retry.go:31] will retry after 13.155752998s: Temporary Error: Get "http://10.100.89.176": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[previous WARNING line repeated 22 more times, verbatim]
I1213 18:56:07.005438    4637 retry.go:31] will retry after 18.748491217s: Temporary Error: Get "http://10.100.89.176": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
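The interleaved "will retry after Ns" lines come from a growing, capped backoff between probes of the http://10.100.89.176 endpoint: roughly 3.2s, 5.4s, 9.2s, 13.2s, 18.7s. A minimal sketch of that pattern, with an assumed growth factor, jitter, and cap rather than minikube's actual retry constants:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff illustrates the pattern behind the "will retry after Ns"
// lines: each failed probe grows the previous delay and adds jitter, up to
// a cap. Factor, jitter, and cap are assumptions, not minikube's constants.
func retryWithBackoff(attempts int, initial time.Duration, probe func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = probe(); err == nil {
			return nil
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay)/2))
		fmt.Printf("will retry after %v: %v\n", jittered, err)
		time.Sleep(jittered)
		delay = delay * 3 / 2 // grow ~1.5x per attempt
		if delay > time.Minute {
			delay = time.Minute
		}
	}
	return err
}

func main() {
	_ = retryWithBackoff(5, 3*time.Second, func() error {
		return fmt.Errorf("Get \"http://10.100.89.176\": context deadline exceeded")
	})
}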
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
E1213 18:57:48.004351    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:50: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: pod "integration-test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:50: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-752103 -n functional-752103
functional_test_pvc_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-752103 -n functional-752103: exit status 2 (336.947911ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
functional_test_pvc_test.go:50: status error: exit status 2 (may be ok)
functional_test_pvc_test.go:50: "functional-752103" apiserver is not running, skipping kubectl commands (state="Stopped")
functional_test_pvc_test.go:51: failed waiting for storage-provisioner: integration-test=storage-provisioner within 4m0s: context deadline exceeded
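For reference, the repeated warnings above are just a label-selector pod list against the kube-system namespace, retried until the 4m0s deadline; every attempt was refused because nothing was listening on 192.168.49.2:8441. The following is a minimal client-go sketch of the same query, assuming the run's KUBECONFIG path from the start log below; it is illustrative only and not code taken from the test suite.

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path reported by this run (assumption: it still points at functional-752103).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22122-2686/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Same namespace and label selector the helper polls; a "connection refused" here
	// points at the apiserver on 192.168.49.2:8441 being down, not at the selector.
	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{
		LabelSelector: "integration-test=storage-provisioner",
	})
	if err != nil {
		log.Fatalf("pod list failed: %v", err)
	}
	for _, p := range pods.Items {
		fmt.Println(p.Name, p.Status.Phase)
	}
}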
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-752103
helpers_test.go:244: (dbg) docker inspect functional-752103:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b",
	        "Created": "2025-12-13T18:27:36.869398923Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 33347,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T18:27:36.933863328Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b/hostname",
	        "HostsPath": "/var/lib/docker/containers/d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b/hosts",
	        "LogPath": "/var/lib/docker/containers/d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b/d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b-json.log",
	        "Name": "/functional-752103",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-752103:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-752103",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b",
	                "LowerDir": "/var/lib/docker/overlay2/4f08fa508e4fa526830ae73227295c5d2e0629c99efbda84852b3df25bd9e170-init/diff:/var/lib/docker/overlay2/4cda671c3c20fb572bbb254b6cb2d66de67b46788c2aa883ec19024f1ff16f23/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4f08fa508e4fa526830ae73227295c5d2e0629c99efbda84852b3df25bd9e170/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4f08fa508e4fa526830ae73227295c5d2e0629c99efbda84852b3df25bd9e170/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4f08fa508e4fa526830ae73227295c5d2e0629c99efbda84852b3df25bd9e170/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-752103",
	                "Source": "/var/lib/docker/volumes/functional-752103/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-752103",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-752103",
	                "name.minikube.sigs.k8s.io": "functional-752103",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "625ea12887c8956887678f2408d6edd5b98f62bce458a6906f4f662a3001a53b",
	            "SandboxKey": "/var/run/docker/netns/625ea12887c8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-752103": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7e:2c:83:4a:30:9a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "84df48e9f7dac8c6a1b67709e5eea216d99d3f16eb50b96c7f0e4a82b3193d56",
	                    "EndpointID": "e69b1f9610d40396647a2d78f0170c31b9cd8e641fc8465e742649cccee8e591",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-752103",
	                        "d72b547cdcc2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
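The inspect output shows the apiserver port 8441/tcp published to 127.0.0.1:32786 on the host, while the failing requests dial the container IP 192.168.49.2:8441 directly. A minimal sketch for probing both endpoints from the host, assuming plain TCP reachability is the question (addresses copied from the output above), could look like this; a successful connect only shows something is listening, not that the apiserver is healthy.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Both addresses come from the docker inspect output above:
	// the published host mapping and the container network IP.
	for _, addr := range []string{"127.0.0.1:32786", "192.168.49.2:8441"} {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			fmt.Printf("%s: unreachable: %v\n", addr, err)
			continue
		}
		conn.Close()
		fmt.Printf("%s: TCP connect OK (apiserver may still be unhealthy)\n", addr)
	}
}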
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-752103 -n functional-752103
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-752103 -n functional-752103: exit status 2 (306.215481ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
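The two status checks above pass Go template format strings ({{.APIServer}} and {{.Host}}) to minikube status, which is how the post-mortem distinguishes a running container (Host: Running) from a stopped control plane (APIServer: Stopped). A minimal text/template sketch of that rendering is shown below; statusView is a hypothetical stand-in for minikube's own status structure, with field names chosen to mirror the templates used above.

package main

import (
	"fmt"
	"os"
	"text/template"
)

// statusView is a hypothetical stand-in for the structure rendered by --format;
// only the field names matter here, mirroring {{.Host}} and {{.APIServer}}.
type statusView struct {
	Host      string
	APIServer string
}

func main() {
	st := statusView{Host: "Running", APIServer: "Stopped"} // values observed in this run
	for _, format := range []string{"{{.Host}}", "{{.APIServer}}"} {
		tmpl := template.Must(template.New("status").Parse(format))
		if err := tmpl.Execute(os.Stdout, st); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
		fmt.Println()
	}
}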
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount          │ -p functional-752103 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3505430281/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │                     │
	│ ssh            │ functional-752103 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │                     │
	│ ssh            │ functional-752103 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │ 13 Dec 25 18:56 UTC │
	│ ssh            │ functional-752103 ssh -- ls -la /mount-9p                                                                                                           │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │ 13 Dec 25 18:56 UTC │
	│ ssh            │ functional-752103 ssh sudo umount -f /mount-9p                                                                                                      │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │                     │
	│ ssh            │ functional-752103 ssh findmnt -T /mount1                                                                                                            │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │                     │
	│ mount          │ -p functional-752103 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2250222461/001:/mount1 --alsologtostderr -v=1                │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │                     │
	│ mount          │ -p functional-752103 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2250222461/001:/mount2 --alsologtostderr -v=1                │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │                     │
	│ mount          │ -p functional-752103 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2250222461/001:/mount3 --alsologtostderr -v=1                │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │                     │
	│ ssh            │ functional-752103 ssh findmnt -T /mount1                                                                                                            │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │ 13 Dec 25 18:56 UTC │
	│ ssh            │ functional-752103 ssh findmnt -T /mount2                                                                                                            │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │ 13 Dec 25 18:56 UTC │
	│ ssh            │ functional-752103 ssh findmnt -T /mount3                                                                                                            │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │ 13 Dec 25 18:56 UTC │
	│ mount          │ -p functional-752103 --kill=true                                                                                                                    │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │                     │
	│ start          │ -p functional-752103 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0       │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-752103 --alsologtostderr -v=1                                                                                      │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │                     │
	│ update-context │ functional-752103 update-context --alsologtostderr -v=2                                                                                             │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │ 13 Dec 25 18:56 UTC │
	│ update-context │ functional-752103 update-context --alsologtostderr -v=2                                                                                             │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │ 13 Dec 25 18:56 UTC │
	│ update-context │ functional-752103 update-context --alsologtostderr -v=2                                                                                             │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │ 13 Dec 25 18:56 UTC │
	│ image          │ functional-752103 image ls --format short --alsologtostderr                                                                                         │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │ 13 Dec 25 18:56 UTC │
	│ image          │ functional-752103 image ls --format yaml --alsologtostderr                                                                                          │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │ 13 Dec 25 18:56 UTC │
	│ ssh            │ functional-752103 ssh pgrep buildkitd                                                                                                               │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │                     │
	│ image          │ functional-752103 image build -t localhost/my-image:functional-752103 testdata/build --alsologtostderr                                              │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │ 13 Dec 25 18:56 UTC │
	│ image          │ functional-752103 image ls                                                                                                                          │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │ 13 Dec 25 18:56 UTC │
	│ image          │ functional-752103 image ls --format json --alsologtostderr                                                                                          │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │ 13 Dec 25 18:56 UTC │
	│ image          │ functional-752103 image ls --format table --alsologtostderr                                                                                         │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:56 UTC │ 13 Dec 25 18:56 UTC │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 18:56:51
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 18:56:51.866136   63710 out.go:360] Setting OutFile to fd 1 ...
	I1213 18:56:51.866293   63710 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:56:51.866306   63710 out.go:374] Setting ErrFile to fd 2...
	I1213 18:56:51.866312   63710 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:56:51.866680   63710 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-2686/.minikube/bin
	I1213 18:56:51.867086   63710 out.go:368] Setting JSON to false
	I1213 18:56:51.867979   63710 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5964,"bootTime":1765646248,"procs":162,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 18:56:51.868051   63710 start.go:143] virtualization:  
	I1213 18:56:51.873271   63710 out.go:179] * [functional-752103] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 18:56:51.876287   63710 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 18:56:51.876379   63710 notify.go:221] Checking for updates...
	I1213 18:56:51.882774   63710 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 18:56:51.885834   63710 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-2686/kubeconfig
	I1213 18:56:51.888894   63710 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-2686/.minikube
	I1213 18:56:51.891868   63710 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 18:56:51.894781   63710 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 18:56:51.898170   63710 config.go:182] Loaded profile config "functional-752103": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 18:56:51.898807   63710 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 18:56:51.935030   63710 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 18:56:51.935207   63710 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 18:56:51.996198   63710 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 18:56:51.98661626 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 18:56:51.997457   63710 docker.go:319] overlay module found
	I1213 18:56:52.001965   63710 out.go:179] * Using the docker driver based on existing profile
	I1213 18:56:52.005039   63710 start.go:309] selected driver: docker
	I1213 18:56:52.005084   63710 start.go:927] validating driver "docker" against &{Name:functional-752103 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-752103 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 18:56:52.005188   63710 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 18:56:52.008846   63710 out.go:203] 
	W1213 18:56:52.011880   63710 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1213 18:56:52.014872   63710 out.go:203] 
	
	
	==> CRI-O <==
	Dec 13 18:50:27 functional-752103 crio[9949]: time="2025-12-13T18:50:27.674061922Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=54566989-a940-4ea0-9cb7-11a5ead5fdab name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:50:27 functional-752103 crio[9949]: time="2025-12-13T18:50:27.67476674Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=9907b75f-aebf-4fc7-948f-3e37eff08342 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:50:27 functional-752103 crio[9949]: time="2025-12-13T18:50:27.675335917Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=a5823f6b-c128-468c-ad19-87c38dcb3493 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:50:27 functional-752103 crio[9949]: time="2025-12-13T18:50:27.675801504Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=eb5c5b0d-734a-42c7-beea-2ae04458cd2c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:50:27 functional-752103 crio[9949]: time="2025-12-13T18:50:27.676236125Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=dc8b8dc3-cec8-44a2-afbb-932c674af235 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:50:27 functional-752103 crio[9949]: time="2025-12-13T18:50:27.676718434Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=fae4abe6-592a-492b-809b-edd01682c93f name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:50:27 functional-752103 crio[9949]: time="2025-12-13T18:50:27.677348338Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=21883f8b-9b90-4bb8-9843-c91d88abb931 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:38 functional-752103 crio[9949]: time="2025-12-13T18:54:38.738192708Z" level=info msg="Checking image status: kicbase/echo-server:functional-752103" id=42880e29-20fa-4822-ab33-09bfec92f2e2 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:38 functional-752103 crio[9949]: time="2025-12-13T18:54:38.738390305Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 13 18:54:38 functional-752103 crio[9949]: time="2025-12-13T18:54:38.738442195Z" level=info msg="Image kicbase/echo-server:functional-752103 not found" id=42880e29-20fa-4822-ab33-09bfec92f2e2 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:38 functional-752103 crio[9949]: time="2025-12-13T18:54:38.738517559Z" level=info msg="Neither image nor artifact kicbase/echo-server:functional-752103 found" id=42880e29-20fa-4822-ab33-09bfec92f2e2 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:38 functional-752103 crio[9949]: time="2025-12-13T18:54:38.772733363Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-752103" id=0cafe629-ccfc-4817-862d-afdd77db9d4d name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:38 functional-752103 crio[9949]: time="2025-12-13T18:54:38.772935481Z" level=info msg="Image docker.io/kicbase/echo-server:functional-752103 not found" id=0cafe629-ccfc-4817-862d-afdd77db9d4d name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:38 functional-752103 crio[9949]: time="2025-12-13T18:54:38.772986583Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:functional-752103 found" id=0cafe629-ccfc-4817-862d-afdd77db9d4d name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:38 functional-752103 crio[9949]: time="2025-12-13T18:54:38.820407985Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-752103" id=6c4fd7ba-e600-48a2-9885-e62592ca43d8 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:38 functional-752103 crio[9949]: time="2025-12-13T18:54:38.820709337Z" level=info msg="Image localhost/kicbase/echo-server:functional-752103 not found" id=6c4fd7ba-e600-48a2-9885-e62592ca43d8 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:38 functional-752103 crio[9949]: time="2025-12-13T18:54:38.82083637Z" level=info msg="Neither image nor artifact localhost/kicbase/echo-server:functional-752103 found" id=6c4fd7ba-e600-48a2-9885-e62592ca43d8 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:42 functional-752103 crio[9949]: time="2025-12-13T18:54:42.697715247Z" level=info msg="Checking image status: kicbase/echo-server:functional-752103" id=0cc0ac5d-dc21-442d-8110-bad1c5434563 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:42 functional-752103 crio[9949]: time="2025-12-13T18:54:42.697864812Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 13 18:54:42 functional-752103 crio[9949]: time="2025-12-13T18:54:42.697906043Z" level=info msg="Image kicbase/echo-server:functional-752103 not found" id=0cc0ac5d-dc21-442d-8110-bad1c5434563 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:42 functional-752103 crio[9949]: time="2025-12-13T18:54:42.697969526Z" level=info msg="Neither image nor artifact kicbase/echo-server:functional-752103 found" id=0cc0ac5d-dc21-442d-8110-bad1c5434563 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:42 functional-752103 crio[9949]: time="2025-12-13T18:54:42.726188607Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-752103" id=ec1be44c-64b8-4c7c-9111-17fc0443252c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:42 functional-752103 crio[9949]: time="2025-12-13T18:54:42.726323311Z" level=info msg="Image docker.io/kicbase/echo-server:functional-752103 not found" id=ec1be44c-64b8-4c7c-9111-17fc0443252c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:42 functional-752103 crio[9949]: time="2025-12-13T18:54:42.726371377Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:functional-752103 found" id=ec1be44c-64b8-4c7c-9111-17fc0443252c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:42 functional-752103 crio[9949]: time="2025-12-13T18:54:42.758302806Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-752103" id=f3f55714-9794-4e76-a331-e7982a0121c6 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:58:47.022266   25496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:58:47.022691   25496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:58:47.024425   25496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:58:47.025130   25496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:58:47.026888   25496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 17:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014739] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.517365] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033368] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.774100] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.795951] kauditd_printk_skb: 39 callbacks suppressed
	[Dec13 18:17] overlayfs: idmapped layers are currently not supported
	[  +0.067652] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec13 18:23] overlayfs: idmapped layers are currently not supported
	[Dec13 18:24] overlayfs: idmapped layers are currently not supported
	[Dec13 18:42] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 18:58:47 up  1:41,  0 user,  load average: 0.43, 0.46, 0.39
	Linux functional-752103 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 18:58:44 functional-752103 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 18:58:45 functional-752103 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1301.
	Dec 13 18:58:45 functional-752103 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:58:45 functional-752103 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:58:45 functional-752103 kubelet[25368]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:58:45 functional-752103 kubelet[25368]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:58:45 functional-752103 kubelet[25368]: E1213 18:58:45.375006   25368 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 18:58:45 functional-752103 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 18:58:45 functional-752103 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 18:58:45 functional-752103 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1302.
	Dec 13 18:58:45 functional-752103 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:58:45 functional-752103 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:58:46 functional-752103 kubelet[25387]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:58:46 functional-752103 kubelet[25387]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:58:46 functional-752103 kubelet[25387]: E1213 18:58:46.058168   25387 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 18:58:46 functional-752103 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 18:58:46 functional-752103 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 18:58:46 functional-752103 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1303.
	Dec 13 18:58:46 functional-752103 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:58:46 functional-752103 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:58:46 functional-752103 kubelet[25460]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:58:46 functional-752103 kubelet[25460]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:58:46 functional-752103 kubelet[25460]: E1213 18:58:46.893707   25460 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 18:58:46 functional-752103 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 18:58:46 functional-752103 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
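The kubelet section at the end of the log above shows the node agent crash-looping (restart counter past 1300) because the v1.35.0-beta.0 kubelet refuses to start on a host that still uses cgroup v1. A minimal diagnostic sketch, assuming the profile name from this report; checking the filesystem type of /sys/fs/cgroup is the usual way to tell which hierarchy the node sees (cgroup2fs means cgroup v2, tmpfs means a legacy v1 hierarchy):

	# report the cgroup filesystem type inside the minikube node
	out/minikube-linux-arm64 -p functional-752103 ssh -- stat -fc %T /sys/fs/cgroup/

The Jenkins host recorded in the start log runs Ubuntu 20.04 on kernel 5.15.0-1084-aws, which defaults to the legacy cgroup v1 hierarchy, and the docker-driver node shares the host's cgroup setup, so the repeated restarts are consistent with that validation error rather than with a broken unit file.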
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-752103 -n functional-752103
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-752103 -n functional-752103: exit status 2 (313.492741ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-752103" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (241.66s)
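The status probe just above reports the apiserver Stopped, and the container status table earlier in the log is empty, i.e. no Kubernetes containers were created on this attempt. Two hedged follow-up checks one might run by hand against the same profile (the --output json flag of minikube status and the --name filter of crictl ps are assumed to be available in the versions used here):

	# full component status as machine-readable JSON
	out/minikube-linux-arm64 status -p functional-752103 --output json

	# was a kube-apiserver container ever created inside the node?
	out/minikube-linux-arm64 -p functional-752103 ssh -- sudo crictl ps -a --name kube-apiserver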

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (3.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-752103 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:234: (dbg) Non-zero exit: kubectl --context functional-752103 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (74.778018ms)

                                                
                                                
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:236: failed to 'kubectl get nodes' with args "kubectl --context functional-752103 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:242: expected to have label "minikube.k8s.io/commit" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/version" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/name" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/primary" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
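Each label assertion above fails identically: the apiserver at 192.168.49.2:8441 refuses connections, kubectl returns an empty items list, and the template's index .items 0 then fails on the empty slice. A sketch of an equivalent query that degrades to empty output instead of a template error, assuming the same kubectl context (the underlying failure is still the unreachable apiserver, not the template):

	# print the label keys of every node; an empty node list prints nothing rather than erroring
	kubectl --context functional-752103 get nodes -o go-template='{{range .items}}{{range $k, $v := .metadata.labels}}{{$k}} {{end}}{{"\n"}}{{end}}'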
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-752103
helpers_test.go:244: (dbg) docker inspect functional-752103:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b",
	        "Created": "2025-12-13T18:27:36.869398923Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 33347,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T18:27:36.933863328Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b/hostname",
	        "HostsPath": "/var/lib/docker/containers/d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b/hosts",
	        "LogPath": "/var/lib/docker/containers/d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b/d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b-json.log",
	        "Name": "/functional-752103",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-752103:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-752103",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d72b547cdcc2570ccfc38166adeb253d945edb331f44b2042fe690cfd9c9702b",
	                "LowerDir": "/var/lib/docker/overlay2/4f08fa508e4fa526830ae73227295c5d2e0629c99efbda84852b3df25bd9e170-init/diff:/var/lib/docker/overlay2/4cda671c3c20fb572bbb254b6cb2d66de67b46788c2aa883ec19024f1ff16f23/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4f08fa508e4fa526830ae73227295c5d2e0629c99efbda84852b3df25bd9e170/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4f08fa508e4fa526830ae73227295c5d2e0629c99efbda84852b3df25bd9e170/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4f08fa508e4fa526830ae73227295c5d2e0629c99efbda84852b3df25bd9e170/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-752103",
	                "Source": "/var/lib/docker/volumes/functional-752103/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-752103",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-752103",
	                "name.minikube.sigs.k8s.io": "functional-752103",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "625ea12887c8956887678f2408d6edd5b98f62bce458a6906f4f662a3001a53b",
	            "SandboxKey": "/var/run/docker/netns/625ea12887c8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-752103": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7e:2c:83:4a:30:9a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "84df48e9f7dac8c6a1b67709e5eea216d99d3f16eb50b96c7f0e4a82b3193d56",
	                    "EndpointID": "e69b1f9610d40396647a2d78f0170c31b9cd8e641fc8465e742649cccee8e591",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-752103",
	                        "d72b547cdcc2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
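The inspect output confirms the kicbase container itself is Running and that the apiserver port 8441/tcp is published on 127.0.0.1:32786, so the refused connections come from nothing listening inside the node rather than from a missing port mapping. Reading that mapping directly, with the container name taken from this report (docker port and the inspect format string below are standard Docker CLI usage):

	# host address:port forwarded to the apiserver port inside the node
	docker port functional-752103 8441/tcp

	# the same value via an inspect format string
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-752103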
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-752103 -n functional-752103
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-752103 -n functional-752103: exit status 2 (433.325873ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-752103 logs -n 25: (1.393751549s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                            ARGS                                                                             │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-752103 ssh sudo crictl images                                                                                                                    │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │ 13 Dec 25 18:42 UTC │
	│ ssh     │ functional-752103 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                          │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │ 13 Dec 25 18:42 UTC │
	│ ssh     │ functional-752103 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                     │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │                     │
	│ cache   │ functional-752103 cache reload                                                                                                                              │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │ 13 Dec 25 18:42 UTC │
	│ ssh     │ functional-752103 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                     │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │ 13 Dec 25 18:42 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                            │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │ 13 Dec 25 18:42 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                         │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │ 13 Dec 25 18:42 UTC │
	│ kubectl │ functional-752103 kubectl -- --context functional-752103 get pods                                                                                           │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │                     │
	│ start   │ -p functional-752103 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                                    │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:42 UTC │                     │
	│ cp      │ functional-752103 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                                                          │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:54 UTC │ 13 Dec 25 18:54 UTC │
	│ config  │ functional-752103 config unset cpus                                                                                                                         │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:54 UTC │ 13 Dec 25 18:54 UTC │
	│ config  │ functional-752103 config get cpus                                                                                                                           │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:54 UTC │                     │
	│ config  │ functional-752103 config set cpus 2                                                                                                                         │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:54 UTC │ 13 Dec 25 18:54 UTC │
	│ config  │ functional-752103 config get cpus                                                                                                                           │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:54 UTC │ 13 Dec 25 18:54 UTC │
	│ config  │ functional-752103 config unset cpus                                                                                                                         │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:54 UTC │ 13 Dec 25 18:54 UTC │
	│ ssh     │ functional-752103 ssh -n functional-752103 sudo cat /home/docker/cp-test.txt                                                                                │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:54 UTC │ 13 Dec 25 18:54 UTC │
	│ config  │ functional-752103 config get cpus                                                                                                                           │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:54 UTC │                     │
	│ license │                                                                                                                                                             │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 18:54 UTC │ 13 Dec 25 18:54 UTC │
	│ cp      │ functional-752103 cp functional-752103:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp233435850/001/cp-test.txt │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:54 UTC │ 13 Dec 25 18:54 UTC │
	│ ssh     │ functional-752103 ssh sudo systemctl is-active docker                                                                                                       │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:54 UTC │                     │
	│ ssh     │ functional-752103 ssh -n functional-752103 sudo cat /home/docker/cp-test.txt                                                                                │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:54 UTC │ 13 Dec 25 18:54 UTC │
	│ ssh     │ functional-752103 ssh sudo systemctl is-active containerd                                                                                                   │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:54 UTC │                     │
	│ cp      │ functional-752103 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                   │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:54 UTC │ 13 Dec 25 18:54 UTC │
	│ ssh     │ functional-752103 ssh -n functional-752103 sudo cat /tmp/does/not/exist/cp-test.txt                                                                         │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:54 UTC │ 13 Dec 25 18:54 UTC │
	│ image   │ functional-752103 image load --daemon kicbase/echo-server:functional-752103 --alsologtostderr                                                               │ functional-752103 │ jenkins │ v1.37.0 │ 13 Dec 25 18:54 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 18:42:16
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 18:42:16.832380   44722 out.go:360] Setting OutFile to fd 1 ...
	I1213 18:42:16.832482   44722 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:42:16.832486   44722 out.go:374] Setting ErrFile to fd 2...
	I1213 18:42:16.832490   44722 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:42:16.832750   44722 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-2686/.minikube/bin
	I1213 18:42:16.833154   44722 out.go:368] Setting JSON to false
	I1213 18:42:16.833990   44722 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5089,"bootTime":1765646248,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 18:42:16.834047   44722 start.go:143] virtualization:  
	I1213 18:42:16.838135   44722 out.go:179] * [functional-752103] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 18:42:16.841728   44722 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 18:42:16.841798   44722 notify.go:221] Checking for updates...
	I1213 18:42:16.848230   44722 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 18:42:16.851409   44722 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-2686/kubeconfig
	I1213 18:42:16.854607   44722 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-2686/.minikube
	I1213 18:42:16.857801   44722 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 18:42:16.860996   44722 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 18:42:16.864675   44722 config.go:182] Loaded profile config "functional-752103": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 18:42:16.864787   44722 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 18:42:16.894628   44722 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 18:42:16.894745   44722 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 18:42:16.957351   44722 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-13 18:42:16.94760506 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 18:42:16.957447   44722 docker.go:319] overlay module found
	I1213 18:42:16.960782   44722 out.go:179] * Using the docker driver based on existing profile
	I1213 18:42:16.963851   44722 start.go:309] selected driver: docker
	I1213 18:42:16.963862   44722 start.go:927] validating driver "docker" against &{Name:functional-752103 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-752103 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 18:42:16.963972   44722 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 18:42:16.964069   44722 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 18:42:17.021522   44722 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-13 18:42:17.012232642 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 18:42:17.021951   44722 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 18:42:17.021974   44722 cni.go:84] Creating CNI manager for ""
	I1213 18:42:17.022024   44722 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 18:42:17.022071   44722 start.go:353] cluster config:
	{Name:functional-752103 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-752103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog
:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 18:42:17.025231   44722 out.go:179] * Starting "functional-752103" primary control-plane node in "functional-752103" cluster
	I1213 18:42:17.028293   44722 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 18:42:17.031266   44722 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 18:42:17.034129   44722 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 18:42:17.034163   44722 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-2686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1213 18:42:17.034171   44722 cache.go:65] Caching tarball of preloaded images
	I1213 18:42:17.034196   44722 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 18:42:17.034259   44722 preload.go:238] Found /home/jenkins/minikube-integration/22122-2686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 18:42:17.034268   44722 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1213 18:42:17.034379   44722 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/config.json ...
	I1213 18:42:17.054759   44722 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 18:42:17.054770   44722 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 18:42:17.054784   44722 cache.go:243] Successfully downloaded all kic artifacts
	I1213 18:42:17.054813   44722 start.go:360] acquireMachinesLock for functional-752103: {Name:mkf4ec1d9e1836ef54983db4562aedfd1a9c51c3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 18:42:17.054868   44722 start.go:364] duration metric: took 38.187µs to acquireMachinesLock for "functional-752103"
	I1213 18:42:17.054886   44722 start.go:96] Skipping create...Using existing machine configuration
	I1213 18:42:17.054891   44722 fix.go:54] fixHost starting: 
	I1213 18:42:17.055151   44722 cli_runner.go:164] Run: docker container inspect functional-752103 --format={{.State.Status}}
	I1213 18:42:17.071486   44722 fix.go:112] recreateIfNeeded on functional-752103: state=Running err=<nil>
	W1213 18:42:17.071504   44722 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 18:42:17.074803   44722 out.go:252] * Updating the running docker "functional-752103" container ...
	I1213 18:42:17.074833   44722 machine.go:94] provisionDockerMachine start ...
	I1213 18:42:17.074935   44722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:42:17.093274   44722 main.go:143] libmachine: Using SSH client type: native
	I1213 18:42:17.093585   44722 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1213 18:42:17.093591   44722 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 18:42:17.244524   44722 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-752103
	
	I1213 18:42:17.244537   44722 ubuntu.go:182] provisioning hostname "functional-752103"
	I1213 18:42:17.244597   44722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:42:17.262380   44722 main.go:143] libmachine: Using SSH client type: native
	I1213 18:42:17.262682   44722 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1213 18:42:17.262690   44722 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-752103 && echo "functional-752103" | sudo tee /etc/hostname
	I1213 18:42:17.422688   44722 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-752103
	
	I1213 18:42:17.422759   44722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:42:17.440827   44722 main.go:143] libmachine: Using SSH client type: native
	I1213 18:42:17.441150   44722 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1213 18:42:17.441163   44722 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-752103' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-752103/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-752103' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 18:42:17.593792   44722 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 18:42:17.593821   44722 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-2686/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-2686/.minikube}
	I1213 18:42:17.593841   44722 ubuntu.go:190] setting up certificates
	I1213 18:42:17.593861   44722 provision.go:84] configureAuth start
	I1213 18:42:17.593949   44722 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-752103
	I1213 18:42:17.612231   44722 provision.go:143] copyHostCerts
	I1213 18:42:17.612297   44722 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem, removing ...
	I1213 18:42:17.612304   44722 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem
	I1213 18:42:17.612382   44722 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem (1675 bytes)
	I1213 18:42:17.612525   44722 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem, removing ...
	I1213 18:42:17.612528   44722 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem
	I1213 18:42:17.612554   44722 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem (1082 bytes)
	I1213 18:42:17.612619   44722 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem, removing ...
	I1213 18:42:17.612622   44722 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem
	I1213 18:42:17.612646   44722 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem (1123 bytes)
	I1213 18:42:17.612700   44722 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem org=jenkins.functional-752103 san=[127.0.0.1 192.168.49.2 functional-752103 localhost minikube]
	I1213 18:42:17.675451   44722 provision.go:177] copyRemoteCerts
	I1213 18:42:17.675509   44722 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 18:42:17.675551   44722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:42:17.693626   44722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa Username:docker}
	I1213 18:42:17.798419   44722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 18:42:17.816185   44722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 18:42:17.833700   44722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 18:42:17.853857   44722 provision.go:87] duration metric: took 259.975405ms to configureAuth
	I1213 18:42:17.853904   44722 ubuntu.go:206] setting minikube options for container-runtime
	I1213 18:42:17.854123   44722 config.go:182] Loaded profile config "functional-752103": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 18:42:17.854230   44722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:42:17.879965   44722 main.go:143] libmachine: Using SSH client type: native
	I1213 18:42:17.880277   44722 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1213 18:42:17.880288   44722 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 18:42:18.248633   44722 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 18:42:18.248647   44722 machine.go:97] duration metric: took 1.173808025s to provisionDockerMachine
	I1213 18:42:18.248658   44722 start.go:293] postStartSetup for "functional-752103" (driver="docker")
	I1213 18:42:18.248669   44722 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 18:42:18.248743   44722 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 18:42:18.248792   44722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:42:18.266147   44722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa Username:docker}
	I1213 18:42:18.373221   44722 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 18:42:18.376713   44722 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 18:42:18.376729   44722 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 18:42:18.376740   44722 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-2686/.minikube/addons for local assets ...
	I1213 18:42:18.376791   44722 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-2686/.minikube/files for local assets ...
	I1213 18:42:18.376867   44722 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem -> 46372.pem in /etc/ssl/certs
	I1213 18:42:18.376940   44722 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/test/nested/copy/4637/hosts -> hosts in /etc/test/nested/copy/4637
	I1213 18:42:18.376981   44722 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4637
	I1213 18:42:18.384622   44722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem --> /etc/ssl/certs/46372.pem (1708 bytes)
	I1213 18:42:18.402512   44722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/test/nested/copy/4637/hosts --> /etc/test/nested/copy/4637/hosts (40 bytes)
	I1213 18:42:18.419539   44722 start.go:296] duration metric: took 170.867557ms for postStartSetup
	I1213 18:42:18.419610   44722 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 18:42:18.419664   44722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:42:18.436637   44722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa Username:docker}
	I1213 18:42:18.538189   44722 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 18:42:18.542827   44722 fix.go:56] duration metric: took 1.487930222s for fixHost
	I1213 18:42:18.542846   44722 start.go:83] releasing machines lock for "functional-752103", held for 1.487968187s
	I1213 18:42:18.542915   44722 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-752103
	I1213 18:42:18.560389   44722 ssh_runner.go:195] Run: cat /version.json
	I1213 18:42:18.560434   44722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:42:18.560692   44722 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 18:42:18.560748   44722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
	I1213 18:42:18.583551   44722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa Username:docker}
	I1213 18:42:18.591018   44722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa Username:docker}
	I1213 18:42:18.701640   44722 ssh_runner.go:195] Run: systemctl --version
	I1213 18:42:18.800116   44722 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 18:42:18.836359   44722 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 18:42:18.840572   44722 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 18:42:18.840646   44722 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 18:42:18.848286   44722 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 18:42:18.848299   44722 start.go:496] detecting cgroup driver to use...
	I1213 18:42:18.848329   44722 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 18:42:18.848379   44722 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 18:42:18.864054   44722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 18:42:18.878242   44722 docker.go:218] disabling cri-docker service (if available) ...
	I1213 18:42:18.878341   44722 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 18:42:18.895499   44722 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 18:42:18.910156   44722 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 18:42:19.020039   44722 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 18:42:19.142208   44722 docker.go:234] disabling docker service ...
	I1213 18:42:19.142263   44722 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 18:42:19.158384   44722 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 18:42:19.171631   44722 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 18:42:19.293369   44722 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 18:42:19.422037   44722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 18:42:19.435333   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 18:42:19.449327   44722 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 18:42:19.449380   44722 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:42:19.458689   44722 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 18:42:19.458748   44722 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:42:19.467502   44722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:42:19.476408   44722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:42:19.485815   44722 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 18:42:19.494237   44722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:42:19.503335   44722 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:42:19.511920   44722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 18:42:19.520510   44722 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 18:42:19.528006   44722 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 18:42:19.535403   44722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 18:42:19.669317   44722 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 18:42:19.868011   44722 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 18:42:19.868104   44722 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 18:42:19.871850   44722 start.go:564] Will wait 60s for crictl version
	I1213 18:42:19.871906   44722 ssh_runner.go:195] Run: which crictl
	I1213 18:42:19.875387   44722 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 18:42:19.901618   44722 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 18:42:19.901703   44722 ssh_runner.go:195] Run: crio --version
	I1213 18:42:19.929436   44722 ssh_runner.go:195] Run: crio --version
	I1213 18:42:19.965392   44722 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1213 18:42:19.968348   44722 cli_runner.go:164] Run: docker network inspect functional-752103 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 18:42:19.986389   44722 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 18:42:19.993243   44722 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1213 18:42:19.996095   44722 kubeadm.go:884] updating cluster {Name:functional-752103 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-752103 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 18:42:19.996213   44722 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 18:42:19.996291   44722 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 18:42:20.057560   44722 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 18:42:20.057583   44722 crio.go:433] Images already preloaded, skipping extraction
	I1213 18:42:20.057640   44722 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 18:42:20.089218   44722 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 18:42:20.089230   44722 cache_images.go:86] Images are preloaded, skipping loading
	I1213 18:42:20.089236   44722 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1213 18:42:20.089328   44722 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-752103 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-752103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 18:42:20.089414   44722 ssh_runner.go:195] Run: crio config
	I1213 18:42:20.177167   44722 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1213 18:42:20.177187   44722 cni.go:84] Creating CNI manager for ""
	I1213 18:42:20.177196   44722 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 18:42:20.177232   44722 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 18:42:20.177254   44722 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-752103 NodeName:functional-752103 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfig
Opts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 18:42:20.177418   44722 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-752103"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 18:42:20.177484   44722 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 18:42:20.185578   44722 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 18:42:20.185638   44722 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 18:42:20.192929   44722 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1213 18:42:20.205146   44722 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 18:42:20.217154   44722 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I1213 18:42:20.229717   44722 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 18:42:20.233247   44722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 18:42:20.353829   44722 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 18:42:20.830403   44722 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103 for IP: 192.168.49.2
	I1213 18:42:20.830413   44722 certs.go:195] generating shared ca certs ...
	I1213 18:42:20.830433   44722 certs.go:227] acquiring lock for ca certs: {Name:mkf9b87b9a1a82bdfcae8a54365751154f6d858f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 18:42:20.830617   44722 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key
	I1213 18:42:20.830683   44722 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key
	I1213 18:42:20.830690   44722 certs.go:257] generating profile certs ...
	I1213 18:42:20.830812   44722 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/client.key
	I1213 18:42:20.830890   44722 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/apiserver.key.597c6026
	I1213 18:42:20.830949   44722 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/proxy-client.key
	I1213 18:42:20.831080   44722 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637.pem (1338 bytes)
	W1213 18:42:20.831115   44722 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637_empty.pem, impossibly tiny 0 bytes
	I1213 18:42:20.831122   44722 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 18:42:20.831151   44722 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem (1082 bytes)
	I1213 18:42:20.831178   44722 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem (1123 bytes)
	I1213 18:42:20.831204   44722 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem (1675 bytes)
	I1213 18:42:20.831248   44722 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem (1708 bytes)
	I1213 18:42:20.831981   44722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 18:42:20.856838   44722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 18:42:20.879274   44722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 18:42:20.903042   44722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 18:42:20.923306   44722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 18:42:20.942121   44722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 18:42:20.960173   44722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 18:42:20.977612   44722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 18:42:20.994747   44722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 18:42:21.015274   44722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637.pem --> /usr/share/ca-certificates/4637.pem (1338 bytes)
	I1213 18:42:21.032852   44722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem --> /usr/share/ca-certificates/46372.pem (1708 bytes)
	I1213 18:42:21.049826   44722 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 18:42:21.062502   44722 ssh_runner.go:195] Run: openssl version
	I1213 18:42:21.068589   44722 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 18:42:21.075691   44722 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 18:42:21.083152   44722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 18:42:21.086777   44722 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I1213 18:42:21.086838   44722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 18:42:21.127646   44722 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 18:42:21.135282   44722 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4637.pem
	I1213 18:42:21.142547   44722 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4637.pem /etc/ssl/certs/4637.pem
	I1213 18:42:21.150436   44722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4637.pem
	I1213 18:42:21.154171   44722 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 18:27 /usr/share/ca-certificates/4637.pem
	I1213 18:42:21.154226   44722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4637.pem
	I1213 18:42:21.195398   44722 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 18:42:21.202918   44722 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/46372.pem
	I1213 18:42:21.210392   44722 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/46372.pem /etc/ssl/certs/46372.pem
	I1213 18:42:21.218018   44722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/46372.pem
	I1213 18:42:21.221839   44722 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 18:27 /usr/share/ca-certificates/46372.pem
	I1213 18:42:21.221907   44722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/46372.pem
	I1213 18:42:21.262578   44722 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 18:42:21.269897   44722 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 18:42:21.273658   44722 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 18:42:21.314538   44722 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 18:42:21.355677   44722 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 18:42:21.398275   44722 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 18:42:21.439207   44722 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 18:42:21.480256   44722 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 18:42:21.526473   44722 kubeadm.go:401] StartCluster: {Name:functional-752103 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-752103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 18:42:21.526551   44722 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 18:42:21.526617   44722 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 18:42:21.557940   44722 cri.go:89] found id: ""
	I1213 18:42:21.558001   44722 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 18:42:21.566021   44722 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 18:42:21.566031   44722 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 18:42:21.566081   44722 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 18:42:21.573603   44722 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 18:42:21.574106   44722 kubeconfig.go:125] found "functional-752103" server: "https://192.168.49.2:8441"
	I1213 18:42:21.575413   44722 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 18:42:21.585702   44722 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-13 18:27:45.810242505 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-13 18:42:20.222041116 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1213 18:42:21.585713   44722 kubeadm.go:1161] stopping kube-system containers ...
	I1213 18:42:21.585724   44722 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1213 18:42:21.585780   44722 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 18:42:21.617768   44722 cri.go:89] found id: ""
	I1213 18:42:21.617827   44722 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 18:42:21.635403   44722 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 18:42:21.643636   44722 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 13 18:31 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Dec 13 18:31 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 13 18:31 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec 13 18:31 /etc/kubernetes/scheduler.conf
	
	I1213 18:42:21.643708   44722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 18:42:21.651764   44722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 18:42:21.659161   44722 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 18:42:21.659213   44722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 18:42:21.666555   44722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 18:42:21.674192   44722 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 18:42:21.674247   44722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 18:42:21.681652   44722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 18:42:21.689753   44722 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 18:42:21.689823   44722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 18:42:21.697372   44722 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 18:42:21.705090   44722 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 18:42:21.753330   44722 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 18:42:23.314116   44722 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.560761972s)
	I1213 18:42:23.314176   44722 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 18:42:23.523724   44722 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 18:42:23.594421   44722 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1213 18:42:23.642920   44722 api_server.go:52] waiting for apiserver process to appear ...
	I1213 18:42:23.642986   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:24.143977   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:24.643428   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:25.143550   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:25.643771   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:26.143193   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:26.643175   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:27.143974   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:27.643187   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:28.143912   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:28.643171   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:29.144072   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:29.644225   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:30.144075   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:30.643706   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:31.143172   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:31.643056   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:32.143628   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:32.643125   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:33.143827   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:33.643131   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:34.143247   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:34.643324   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:35.143141   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:35.643248   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:36.143915   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:36.644040   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:37.143715   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:37.643270   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:38.143997   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:38.643143   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:39.144023   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:39.643975   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:40.143050   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:40.643089   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:41.143722   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:41.643477   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:42.143838   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:42.643431   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:43.143175   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:43.643406   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:44.143895   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:44.643143   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:45.144217   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:45.644055   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:46.143137   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:46.644107   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:47.143996   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:47.643160   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:48.143815   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:48.643858   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:49.143166   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:49.644081   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:50.143765   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:50.643065   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:51.143582   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:51.643619   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:52.143220   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:52.643909   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:53.143832   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:53.643709   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:54.143426   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:54.643284   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:55.143992   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:55.643406   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:56.143943   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:56.643844   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:57.143618   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:57.643188   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:58.143857   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:58.643381   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:59.143183   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:42:59.643139   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:00.143730   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:00.643184   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:01.143789   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:01.643677   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:02.143883   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:02.643235   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:03.143175   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:03.643112   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:04.143893   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:04.643955   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:05.144057   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:05.643239   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:06.143229   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:06.643162   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:07.143132   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:07.643342   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:08.143161   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:08.643365   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:09.144023   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:09.643759   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:10.143925   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:10.644116   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:11.143184   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:11.643163   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:12.144081   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:12.643761   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:13.143171   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:13.643174   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:14.143070   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:14.643090   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:15.143762   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:15.643166   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:16.143069   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:16.644103   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:17.143993   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:17.643934   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:18.143216   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:18.643988   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:19.143982   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:19.643766   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:20.143191   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:20.644118   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:21.143094   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:21.644013   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:22.143973   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:22.643967   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:23.143991   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:23.643861   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:43:23.643960   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:43:23.674160   44722 cri.go:89] found id: ""
	I1213 18:43:23.674175   44722 logs.go:282] 0 containers: []
	W1213 18:43:23.674182   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:43:23.674187   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:43:23.674245   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:43:23.700540   44722 cri.go:89] found id: ""
	I1213 18:43:23.700554   44722 logs.go:282] 0 containers: []
	W1213 18:43:23.700561   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:43:23.700566   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:43:23.700624   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:43:23.726064   44722 cri.go:89] found id: ""
	I1213 18:43:23.726078   44722 logs.go:282] 0 containers: []
	W1213 18:43:23.726084   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:43:23.726089   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:43:23.726148   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:43:23.752099   44722 cri.go:89] found id: ""
	I1213 18:43:23.752113   44722 logs.go:282] 0 containers: []
	W1213 18:43:23.752120   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:43:23.752125   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:43:23.752190   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:43:23.778105   44722 cri.go:89] found id: ""
	I1213 18:43:23.778120   44722 logs.go:282] 0 containers: []
	W1213 18:43:23.778126   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:43:23.778131   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:43:23.778193   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:43:23.806032   44722 cri.go:89] found id: ""
	I1213 18:43:23.806047   44722 logs.go:282] 0 containers: []
	W1213 18:43:23.806054   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:43:23.806059   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:43:23.806117   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:43:23.832635   44722 cri.go:89] found id: ""
	I1213 18:43:23.832649   44722 logs.go:282] 0 containers: []
	W1213 18:43:23.832658   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:43:23.832667   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:43:23.832679   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:43:23.899244   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:43:23.899262   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:43:23.910777   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:43:23.910793   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:43:23.979546   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:43:23.970843   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:23.971479   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:23.973158   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:23.973794   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:23.975445   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:43:23.970843   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:23.971479   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:23.973158   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:23.973794   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:23.975445   11034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:43:23.979557   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:43:23.979567   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:43:24.055422   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:43:24.055441   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:43:26.587216   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:26.602744   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:43:26.602803   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:43:26.637528   44722 cri.go:89] found id: ""
	I1213 18:43:26.637543   44722 logs.go:282] 0 containers: []
	W1213 18:43:26.637550   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:43:26.637555   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:43:26.637627   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:43:26.668738   44722 cri.go:89] found id: ""
	I1213 18:43:26.668752   44722 logs.go:282] 0 containers: []
	W1213 18:43:26.668759   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:43:26.668764   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:43:26.668820   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:43:26.694813   44722 cri.go:89] found id: ""
	I1213 18:43:26.694827   44722 logs.go:282] 0 containers: []
	W1213 18:43:26.694834   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:43:26.694839   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:43:26.694903   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:43:26.724152   44722 cri.go:89] found id: ""
	I1213 18:43:26.724165   44722 logs.go:282] 0 containers: []
	W1213 18:43:26.724172   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:43:26.724177   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:43:26.724234   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:43:26.753666   44722 cri.go:89] found id: ""
	I1213 18:43:26.753680   44722 logs.go:282] 0 containers: []
	W1213 18:43:26.753687   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:43:26.753692   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:43:26.753751   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:43:26.778797   44722 cri.go:89] found id: ""
	I1213 18:43:26.778810   44722 logs.go:282] 0 containers: []
	W1213 18:43:26.778817   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:43:26.778822   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:43:26.778878   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:43:26.804095   44722 cri.go:89] found id: ""
	I1213 18:43:26.804108   44722 logs.go:282] 0 containers: []
	W1213 18:43:26.804121   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:43:26.804128   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:43:26.804139   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:43:26.872610   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:43:26.863726   11132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:26.864249   11132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:26.865989   11132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:26.866485   11132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:26.868188   11132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:43:26.863726   11132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:26.864249   11132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:26.865989   11132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:26.866485   11132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:26.868188   11132 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:43:26.872619   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:43:26.872629   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:43:26.941929   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:43:26.941948   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:43:26.969504   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:43:26.969520   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:43:27.036106   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:43:27.036126   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:43:29.549238   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:29.561563   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:43:29.561629   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:43:29.595212   44722 cri.go:89] found id: ""
	I1213 18:43:29.595227   44722 logs.go:282] 0 containers: []
	W1213 18:43:29.595234   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:43:29.595239   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:43:29.595298   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:43:29.632368   44722 cri.go:89] found id: ""
	I1213 18:43:29.632382   44722 logs.go:282] 0 containers: []
	W1213 18:43:29.632388   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:43:29.632393   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:43:29.632450   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:43:29.661185   44722 cri.go:89] found id: ""
	I1213 18:43:29.661199   44722 logs.go:282] 0 containers: []
	W1213 18:43:29.661206   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:43:29.661211   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:43:29.661271   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:43:29.686961   44722 cri.go:89] found id: ""
	I1213 18:43:29.686974   44722 logs.go:282] 0 containers: []
	W1213 18:43:29.686981   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:43:29.686986   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:43:29.687049   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:43:29.713104   44722 cri.go:89] found id: ""
	I1213 18:43:29.713118   44722 logs.go:282] 0 containers: []
	W1213 18:43:29.713125   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:43:29.713130   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:43:29.713190   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:43:29.738029   44722 cri.go:89] found id: ""
	I1213 18:43:29.738042   44722 logs.go:282] 0 containers: []
	W1213 18:43:29.738049   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:43:29.738054   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:43:29.738116   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:43:29.763765   44722 cri.go:89] found id: ""
	I1213 18:43:29.763779   44722 logs.go:282] 0 containers: []
	W1213 18:43:29.763785   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:43:29.763793   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:43:29.763803   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:43:29.829845   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:43:29.829864   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:43:29.841137   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:43:29.841153   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:43:29.910214   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:43:29.900921   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:29.902099   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:29.903031   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:29.903808   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:29.904683   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:43:29.900921   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:29.902099   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:29.903031   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:29.903808   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:29.904683   11241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:43:29.910238   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:43:29.910251   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:43:29.979995   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:43:29.980012   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:43:32.559824   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:32.569836   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:43:32.569896   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:43:32.598661   44722 cri.go:89] found id: ""
	I1213 18:43:32.598675   44722 logs.go:282] 0 containers: []
	W1213 18:43:32.598682   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:43:32.598687   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:43:32.598741   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:43:32.629547   44722 cri.go:89] found id: ""
	I1213 18:43:32.629562   44722 logs.go:282] 0 containers: []
	W1213 18:43:32.629568   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:43:32.629573   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:43:32.629650   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:43:32.654825   44722 cri.go:89] found id: ""
	I1213 18:43:32.654839   44722 logs.go:282] 0 containers: []
	W1213 18:43:32.654846   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:43:32.654851   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:43:32.654908   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:43:32.680611   44722 cri.go:89] found id: ""
	I1213 18:43:32.680625   44722 logs.go:282] 0 containers: []
	W1213 18:43:32.680632   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:43:32.680637   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:43:32.680695   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:43:32.706618   44722 cri.go:89] found id: ""
	I1213 18:43:32.706632   44722 logs.go:282] 0 containers: []
	W1213 18:43:32.706639   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:43:32.706643   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:43:32.706702   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:43:32.730958   44722 cri.go:89] found id: ""
	I1213 18:43:32.730971   44722 logs.go:282] 0 containers: []
	W1213 18:43:32.730978   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:43:32.730983   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:43:32.731052   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:43:32.759159   44722 cri.go:89] found id: ""
	I1213 18:43:32.759172   44722 logs.go:282] 0 containers: []
	W1213 18:43:32.759179   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:43:32.759186   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:43:32.759196   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:43:32.824778   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:43:32.824797   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:43:32.835474   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:43:32.835491   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:43:32.898129   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:43:32.889603   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:32.890366   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:32.891862   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:32.892440   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:32.893974   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:43:32.889603   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:32.890366   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:32.891862   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:32.892440   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:32.893974   11344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:43:32.898149   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:43:32.898160   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:43:32.970010   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:43:32.970027   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:43:35.499162   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:35.510104   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:43:35.510168   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:43:35.536034   44722 cri.go:89] found id: ""
	I1213 18:43:35.536054   44722 logs.go:282] 0 containers: []
	W1213 18:43:35.536061   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:43:35.536066   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:43:35.536125   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:43:35.560363   44722 cri.go:89] found id: ""
	I1213 18:43:35.560377   44722 logs.go:282] 0 containers: []
	W1213 18:43:35.560384   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:43:35.560389   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:43:35.560447   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:43:35.595466   44722 cri.go:89] found id: ""
	I1213 18:43:35.595480   44722 logs.go:282] 0 containers: []
	W1213 18:43:35.595486   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:43:35.595491   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:43:35.595546   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:43:35.626296   44722 cri.go:89] found id: ""
	I1213 18:43:35.626310   44722 logs.go:282] 0 containers: []
	W1213 18:43:35.626316   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:43:35.626321   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:43:35.626376   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:43:35.653200   44722 cri.go:89] found id: ""
	I1213 18:43:35.653214   44722 logs.go:282] 0 containers: []
	W1213 18:43:35.653221   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:43:35.653225   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:43:35.653322   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:43:35.678439   44722 cri.go:89] found id: ""
	I1213 18:43:35.678453   44722 logs.go:282] 0 containers: []
	W1213 18:43:35.678459   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:43:35.678464   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:43:35.678525   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:43:35.703934   44722 cri.go:89] found id: ""
	I1213 18:43:35.703948   44722 logs.go:282] 0 containers: []
	W1213 18:43:35.703954   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:43:35.703962   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:43:35.703972   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:43:35.769879   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:43:35.769897   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:43:35.781228   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:43:35.781245   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:43:35.848304   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:43:35.840026   11450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:35.840682   11450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:35.842398   11450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:35.842978   11450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:35.844548   11450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:43:35.840026   11450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:35.840682   11450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:35.842398   11450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:35.842978   11450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:35.844548   11450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:43:35.848316   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:43:35.848327   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:43:35.917611   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:43:35.917630   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:43:38.449407   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:38.459447   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:43:38.459504   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:43:38.485144   44722 cri.go:89] found id: ""
	I1213 18:43:38.485156   44722 logs.go:282] 0 containers: []
	W1213 18:43:38.485163   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:43:38.485179   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:43:38.485241   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:43:38.513966   44722 cri.go:89] found id: ""
	I1213 18:43:38.513980   44722 logs.go:282] 0 containers: []
	W1213 18:43:38.513987   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:43:38.513992   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:43:38.514050   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:43:38.540044   44722 cri.go:89] found id: ""
	I1213 18:43:38.540058   44722 logs.go:282] 0 containers: []
	W1213 18:43:38.540065   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:43:38.540070   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:43:38.540128   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:43:38.570046   44722 cri.go:89] found id: ""
	I1213 18:43:38.570060   44722 logs.go:282] 0 containers: []
	W1213 18:43:38.570067   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:43:38.570072   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:43:38.570131   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:43:38.602431   44722 cri.go:89] found id: ""
	I1213 18:43:38.602444   44722 logs.go:282] 0 containers: []
	W1213 18:43:38.602451   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:43:38.602456   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:43:38.602513   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:43:38.631212   44722 cri.go:89] found id: ""
	I1213 18:43:38.631226   44722 logs.go:282] 0 containers: []
	W1213 18:43:38.631233   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:43:38.631238   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:43:38.631295   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:43:38.658361   44722 cri.go:89] found id: ""
	I1213 18:43:38.658375   44722 logs.go:282] 0 containers: []
	W1213 18:43:38.658383   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:43:38.658391   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:43:38.658401   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:43:38.728418   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:43:38.728436   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:43:38.739710   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:43:38.739726   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:43:38.807705   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:43:38.799135   11557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:38.799833   11557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:38.801634   11557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:38.802286   11557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:38.803965   11557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:43:38.799135   11557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:38.799833   11557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:38.801634   11557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:38.802286   11557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:38.803965   11557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:43:38.807715   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:43:38.807726   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:43:38.876773   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:43:38.876792   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:43:41.406031   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:41.416061   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:43:41.416122   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:43:41.441164   44722 cri.go:89] found id: ""
	I1213 18:43:41.441178   44722 logs.go:282] 0 containers: []
	W1213 18:43:41.441184   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:43:41.441189   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:43:41.441246   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:43:41.468283   44722 cri.go:89] found id: ""
	I1213 18:43:41.468296   44722 logs.go:282] 0 containers: []
	W1213 18:43:41.468303   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:43:41.468313   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:43:41.468369   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:43:41.492435   44722 cri.go:89] found id: ""
	I1213 18:43:41.492449   44722 logs.go:282] 0 containers: []
	W1213 18:43:41.492456   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:43:41.492461   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:43:41.492525   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:43:41.517861   44722 cri.go:89] found id: ""
	I1213 18:43:41.517874   44722 logs.go:282] 0 containers: []
	W1213 18:43:41.517881   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:43:41.517886   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:43:41.517946   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:43:41.542334   44722 cri.go:89] found id: ""
	I1213 18:43:41.542348   44722 logs.go:282] 0 containers: []
	W1213 18:43:41.542354   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:43:41.542359   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:43:41.542420   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:43:41.566791   44722 cri.go:89] found id: ""
	I1213 18:43:41.566805   44722 logs.go:282] 0 containers: []
	W1213 18:43:41.566812   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:43:41.566817   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:43:41.566873   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:43:41.605333   44722 cri.go:89] found id: ""
	I1213 18:43:41.605347   44722 logs.go:282] 0 containers: []
	W1213 18:43:41.605353   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:43:41.605361   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:43:41.605372   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:43:41.685285   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:43:41.685307   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:43:41.719016   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:43:41.719031   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:43:41.784620   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:43:41.784638   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:43:41.797084   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:43:41.797099   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:43:41.863425   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:43:41.855920   11672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:41.856329   11672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:41.857901   11672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:41.858215   11672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:41.859646   11672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:43:41.855920   11672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:41.856329   11672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:41.857901   11672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:41.858215   11672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:41.859646   11672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:43:44.365147   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:44.375234   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:43:44.375292   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:43:44.404071   44722 cri.go:89] found id: ""
	I1213 18:43:44.404084   44722 logs.go:282] 0 containers: []
	W1213 18:43:44.404091   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:43:44.404100   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:43:44.404159   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:43:44.429141   44722 cri.go:89] found id: ""
	I1213 18:43:44.429154   44722 logs.go:282] 0 containers: []
	W1213 18:43:44.429161   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:43:44.429166   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:43:44.429235   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:43:44.453307   44722 cri.go:89] found id: ""
	I1213 18:43:44.453321   44722 logs.go:282] 0 containers: []
	W1213 18:43:44.453328   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:43:44.453332   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:43:44.453409   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:43:44.478549   44722 cri.go:89] found id: ""
	I1213 18:43:44.478563   44722 logs.go:282] 0 containers: []
	W1213 18:43:44.478570   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:43:44.478576   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:43:44.478636   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:43:44.504258   44722 cri.go:89] found id: ""
	I1213 18:43:44.504272   44722 logs.go:282] 0 containers: []
	W1213 18:43:44.504278   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:43:44.504283   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:43:44.504340   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:43:44.528573   44722 cri.go:89] found id: ""
	I1213 18:43:44.528587   44722 logs.go:282] 0 containers: []
	W1213 18:43:44.528594   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:43:44.528599   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:43:44.528655   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:43:44.553529   44722 cri.go:89] found id: ""
	I1213 18:43:44.553555   44722 logs.go:282] 0 containers: []
	W1213 18:43:44.553562   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:43:44.553570   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:43:44.553581   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:43:44.591322   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:43:44.591339   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:43:44.676235   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:43:44.676264   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:43:44.687308   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:43:44.687333   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:43:44.749534   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:43:44.740808   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:44.741545   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:44.743186   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:44.743511   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:44.745093   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:43:44.740808   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:44.741545   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:44.743186   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:44.743511   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:44.745093   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:43:44.749567   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:43:44.749577   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:43:47.317951   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:47.328222   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:43:47.328296   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:43:47.357484   44722 cri.go:89] found id: ""
	I1213 18:43:47.357498   44722 logs.go:282] 0 containers: []
	W1213 18:43:47.357515   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:43:47.357521   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:43:47.357593   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:43:47.388340   44722 cri.go:89] found id: ""
	I1213 18:43:47.388354   44722 logs.go:282] 0 containers: []
	W1213 18:43:47.388362   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:43:47.388367   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:43:47.388431   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:43:47.412714   44722 cri.go:89] found id: ""
	I1213 18:43:47.412726   44722 logs.go:282] 0 containers: []
	W1213 18:43:47.412733   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:43:47.412738   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:43:47.412794   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:43:47.437349   44722 cri.go:89] found id: ""
	I1213 18:43:47.437363   44722 logs.go:282] 0 containers: []
	W1213 18:43:47.437369   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:43:47.437374   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:43:47.437432   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:43:47.461369   44722 cri.go:89] found id: ""
	I1213 18:43:47.461383   44722 logs.go:282] 0 containers: []
	W1213 18:43:47.461390   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:43:47.461395   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:43:47.461454   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:43:47.494140   44722 cri.go:89] found id: ""
	I1213 18:43:47.494154   44722 logs.go:282] 0 containers: []
	W1213 18:43:47.494161   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:43:47.494166   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:43:47.494223   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:43:47.519020   44722 cri.go:89] found id: ""
	I1213 18:43:47.519033   44722 logs.go:282] 0 containers: []
	W1213 18:43:47.519040   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:43:47.519047   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:43:47.519060   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:43:47.587741   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:43:47.587760   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:43:47.623942   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:43:47.623957   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:43:47.696440   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:43:47.696459   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:43:47.707187   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:43:47.707203   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:43:47.769911   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:43:47.762074   11880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:47.762544   11880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:47.764216   11880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:47.764680   11880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:47.766131   11880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:43:47.762074   11880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:47.762544   11880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:47.764216   11880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:47.764680   11880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:47.766131   11880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:43:50.270188   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:50.280132   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:43:50.280190   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:43:50.308672   44722 cri.go:89] found id: ""
	I1213 18:43:50.308686   44722 logs.go:282] 0 containers: []
	W1213 18:43:50.308693   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:43:50.308699   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:43:50.308758   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:43:50.335996   44722 cri.go:89] found id: ""
	I1213 18:43:50.336010   44722 logs.go:282] 0 containers: []
	W1213 18:43:50.336016   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:43:50.336021   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:43:50.336080   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:43:50.361733   44722 cri.go:89] found id: ""
	I1213 18:43:50.361746   44722 logs.go:282] 0 containers: []
	W1213 18:43:50.361753   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:43:50.361758   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:43:50.361816   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:43:50.387122   44722 cri.go:89] found id: ""
	I1213 18:43:50.387137   44722 logs.go:282] 0 containers: []
	W1213 18:43:50.387143   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:43:50.387148   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:43:50.387204   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:43:50.411746   44722 cri.go:89] found id: ""
	I1213 18:43:50.411760   44722 logs.go:282] 0 containers: []
	W1213 18:43:50.411766   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:43:50.411771   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:43:50.411828   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:43:50.439079   44722 cri.go:89] found id: ""
	I1213 18:43:50.439093   44722 logs.go:282] 0 containers: []
	W1213 18:43:50.439100   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:43:50.439104   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:43:50.439158   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:43:50.464264   44722 cri.go:89] found id: ""
	I1213 18:43:50.464278   44722 logs.go:282] 0 containers: []
	W1213 18:43:50.464285   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:43:50.464293   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:43:50.464303   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:43:50.530938   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:43:50.530956   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:43:50.541880   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:43:50.541897   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:43:50.622277   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:43:50.613287   11970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:50.613702   11970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:50.615208   11970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:50.615836   11970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:50.616931   11970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:43:50.613287   11970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:50.613702   11970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:50.615208   11970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:50.615836   11970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:50.616931   11970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:43:50.622299   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:43:50.622311   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:43:50.693744   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:43:50.693765   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:43:53.224830   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:53.235168   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:43:53.235224   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:43:53.261284   44722 cri.go:89] found id: ""
	I1213 18:43:53.261297   44722 logs.go:282] 0 containers: []
	W1213 18:43:53.261304   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:43:53.261309   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:43:53.261369   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:43:53.287104   44722 cri.go:89] found id: ""
	I1213 18:43:53.287118   44722 logs.go:282] 0 containers: []
	W1213 18:43:53.287125   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:43:53.287136   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:43:53.287197   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:43:53.312612   44722 cri.go:89] found id: ""
	I1213 18:43:53.312626   44722 logs.go:282] 0 containers: []
	W1213 18:43:53.312636   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:43:53.312641   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:43:53.312700   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:43:53.338548   44722 cri.go:89] found id: ""
	I1213 18:43:53.338562   44722 logs.go:282] 0 containers: []
	W1213 18:43:53.338570   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:43:53.338575   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:43:53.338634   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:43:53.363849   44722 cri.go:89] found id: ""
	I1213 18:43:53.363862   44722 logs.go:282] 0 containers: []
	W1213 18:43:53.363869   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:43:53.363874   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:43:53.363933   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:43:53.388677   44722 cri.go:89] found id: ""
	I1213 18:43:53.388693   44722 logs.go:282] 0 containers: []
	W1213 18:43:53.388700   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:43:53.388707   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:43:53.388764   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:43:53.413384   44722 cri.go:89] found id: ""
	I1213 18:43:53.413398   44722 logs.go:282] 0 containers: []
	W1213 18:43:53.413405   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:43:53.413412   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:43:53.413426   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:43:53.480895   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:43:53.480915   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:43:53.510174   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:43:53.510191   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:43:53.579252   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:43:53.579272   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:43:53.594356   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:43:53.594373   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:43:53.674807   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:43:53.667137   12096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:53.667568   12096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:53.669097   12096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:53.669497   12096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:53.670996   12096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:43:53.667137   12096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:53.667568   12096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:53.669097   12096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:53.669497   12096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:53.670996   12096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:43:56.175034   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:56.185031   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:43:56.185091   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:43:56.210252   44722 cri.go:89] found id: ""
	I1213 18:43:56.210266   44722 logs.go:282] 0 containers: []
	W1213 18:43:56.210273   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:43:56.210289   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:43:56.210345   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:43:56.238190   44722 cri.go:89] found id: ""
	I1213 18:43:56.238204   44722 logs.go:282] 0 containers: []
	W1213 18:43:56.238211   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:43:56.238216   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:43:56.238280   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:43:56.262334   44722 cri.go:89] found id: ""
	I1213 18:43:56.262361   44722 logs.go:282] 0 containers: []
	W1213 18:43:56.262368   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:43:56.262374   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:43:56.262439   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:43:56.286668   44722 cri.go:89] found id: ""
	I1213 18:43:56.286681   44722 logs.go:282] 0 containers: []
	W1213 18:43:56.286688   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:43:56.286693   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:43:56.286753   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:43:56.312401   44722 cri.go:89] found id: ""
	I1213 18:43:56.312426   44722 logs.go:282] 0 containers: []
	W1213 18:43:56.312434   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:43:56.312439   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:43:56.312514   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:43:56.337419   44722 cri.go:89] found id: ""
	I1213 18:43:56.337433   44722 logs.go:282] 0 containers: []
	W1213 18:43:56.337440   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:43:56.337446   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:43:56.337512   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:43:56.363240   44722 cri.go:89] found id: ""
	I1213 18:43:56.363252   44722 logs.go:282] 0 containers: []
	W1213 18:43:56.363259   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:43:56.363274   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:43:56.363285   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:43:56.427558   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:43:56.427576   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:43:56.438948   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:43:56.438963   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:43:56.504100   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:43:56.496063   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:56.496558   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:56.498109   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:56.498537   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:56.500111   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:43:56.496063   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:56.496558   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:56.498109   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:56.498537   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:56.500111   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:43:56.504110   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:43:56.504121   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:43:56.576300   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:43:56.576319   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:43:59.120724   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:43:59.131483   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:43:59.131541   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:43:59.161664   44722 cri.go:89] found id: ""
	I1213 18:43:59.161677   44722 logs.go:282] 0 containers: []
	W1213 18:43:59.161684   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:43:59.161689   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:43:59.161747   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:43:59.186541   44722 cri.go:89] found id: ""
	I1213 18:43:59.186554   44722 logs.go:282] 0 containers: []
	W1213 18:43:59.186561   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:43:59.186566   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:43:59.186631   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:43:59.214613   44722 cri.go:89] found id: ""
	I1213 18:43:59.214627   44722 logs.go:282] 0 containers: []
	W1213 18:43:59.214634   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:43:59.214639   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:43:59.214696   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:43:59.239790   44722 cri.go:89] found id: ""
	I1213 18:43:59.239803   44722 logs.go:282] 0 containers: []
	W1213 18:43:59.239810   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:43:59.239815   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:43:59.239881   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:43:59.268177   44722 cri.go:89] found id: ""
	I1213 18:43:59.268191   44722 logs.go:282] 0 containers: []
	W1213 18:43:59.268198   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:43:59.268203   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:43:59.268267   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:43:59.292660   44722 cri.go:89] found id: ""
	I1213 18:43:59.292674   44722 logs.go:282] 0 containers: []
	W1213 18:43:59.292680   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:43:59.292687   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:43:59.292746   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:43:59.318413   44722 cri.go:89] found id: ""
	I1213 18:43:59.318428   44722 logs.go:282] 0 containers: []
	W1213 18:43:59.318434   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:43:59.318442   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:43:59.318453   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:43:59.383565   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:43:59.383584   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:43:59.394753   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:43:59.394770   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:43:59.455757   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:43:59.448022   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:59.448571   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:59.450046   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:59.450376   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:59.451813   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:43:59.448022   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:59.448571   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:59.450046   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:59.450376   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:43:59.451813   12286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:43:59.455767   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:43:59.455777   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:43:59.527189   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:43:59.527209   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:02.063131   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:02.073460   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:02.073527   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:02.100600   44722 cri.go:89] found id: ""
	I1213 18:44:02.100614   44722 logs.go:282] 0 containers: []
	W1213 18:44:02.100621   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:02.100626   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:02.100683   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:02.128484   44722 cri.go:89] found id: ""
	I1213 18:44:02.128498   44722 logs.go:282] 0 containers: []
	W1213 18:44:02.128505   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:02.128510   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:02.128569   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:02.153979   44722 cri.go:89] found id: ""
	I1213 18:44:02.153994   44722 logs.go:282] 0 containers: []
	W1213 18:44:02.154000   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:02.154005   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:02.154063   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:02.178950   44722 cri.go:89] found id: ""
	I1213 18:44:02.178964   44722 logs.go:282] 0 containers: []
	W1213 18:44:02.178971   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:02.178975   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:02.179034   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:02.203560   44722 cri.go:89] found id: ""
	I1213 18:44:02.203573   44722 logs.go:282] 0 containers: []
	W1213 18:44:02.203599   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:02.203604   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:02.203668   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:02.235040   44722 cri.go:89] found id: ""
	I1213 18:44:02.235054   44722 logs.go:282] 0 containers: []
	W1213 18:44:02.235061   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:02.235066   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:02.235125   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:02.262563   44722 cri.go:89] found id: ""
	I1213 18:44:02.262578   44722 logs.go:282] 0 containers: []
	W1213 18:44:02.262591   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:02.262598   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:02.262610   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:02.330429   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:02.330448   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:02.358932   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:02.358953   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:02.430089   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:02.430108   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:02.441162   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:02.441179   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:02.505804   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:02.496664   12407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:02.498082   12407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:02.499014   12407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:02.500016   12407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:02.500340   12407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:02.496664   12407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:02.498082   12407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:02.499014   12407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:02.500016   12407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:02.500340   12407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:05.006147   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:05.021965   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:05.022041   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:05.052122   44722 cri.go:89] found id: ""
	I1213 18:44:05.052138   44722 logs.go:282] 0 containers: []
	W1213 18:44:05.052145   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:05.052152   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:05.052213   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:05.079304   44722 cri.go:89] found id: ""
	I1213 18:44:05.079318   44722 logs.go:282] 0 containers: []
	W1213 18:44:05.079325   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:05.079330   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:05.079387   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:05.106489   44722 cri.go:89] found id: ""
	I1213 18:44:05.106502   44722 logs.go:282] 0 containers: []
	W1213 18:44:05.106510   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:05.106515   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:05.106573   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:05.132104   44722 cri.go:89] found id: ""
	I1213 18:44:05.132118   44722 logs.go:282] 0 containers: []
	W1213 18:44:05.132125   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:05.132130   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:05.132186   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:05.157774   44722 cri.go:89] found id: ""
	I1213 18:44:05.157789   44722 logs.go:282] 0 containers: []
	W1213 18:44:05.157795   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:05.157800   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:05.157860   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:05.185228   44722 cri.go:89] found id: ""
	I1213 18:44:05.185241   44722 logs.go:282] 0 containers: []
	W1213 18:44:05.185248   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:05.185254   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:05.185313   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:05.211945   44722 cri.go:89] found id: ""
	I1213 18:44:05.211959   44722 logs.go:282] 0 containers: []
	W1213 18:44:05.211965   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:05.211973   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:05.211982   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:05.240000   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:05.240016   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:05.305313   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:05.305331   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:05.316614   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:05.316628   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:05.380462   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:05.372183   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:05.373062   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:05.374815   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:05.375112   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:05.376609   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:05.372183   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:05.373062   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:05.374815   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:05.375112   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:05.376609   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:05.380472   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:05.380482   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:07.948856   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:07.959788   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:07.959853   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:07.985640   44722 cri.go:89] found id: ""
	I1213 18:44:07.985655   44722 logs.go:282] 0 containers: []
	W1213 18:44:07.985662   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:07.985667   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:07.985735   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:08.017082   44722 cri.go:89] found id: ""
	I1213 18:44:08.017096   44722 logs.go:282] 0 containers: []
	W1213 18:44:08.017105   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:08.017111   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:08.017176   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:08.046580   44722 cri.go:89] found id: ""
	I1213 18:44:08.046595   44722 logs.go:282] 0 containers: []
	W1213 18:44:08.046603   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:08.046609   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:08.046678   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:08.073255   44722 cri.go:89] found id: ""
	I1213 18:44:08.073269   44722 logs.go:282] 0 containers: []
	W1213 18:44:08.073275   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:08.073281   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:08.073342   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:08.101465   44722 cri.go:89] found id: ""
	I1213 18:44:08.101479   44722 logs.go:282] 0 containers: []
	W1213 18:44:08.101486   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:08.101491   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:08.101560   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:08.126539   44722 cri.go:89] found id: ""
	I1213 18:44:08.126553   44722 logs.go:282] 0 containers: []
	W1213 18:44:08.126559   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:08.126564   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:08.126624   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:08.151274   44722 cri.go:89] found id: ""
	I1213 18:44:08.151287   44722 logs.go:282] 0 containers: []
	W1213 18:44:08.151294   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:08.151301   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:08.151311   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:08.221734   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:08.221760   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:08.234257   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:08.234274   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:08.303822   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:08.293709   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:08.294557   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:08.296695   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:08.297712   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:08.298655   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:08.293709   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:08.294557   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:08.296695   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:08.297712   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:08.298655   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:08.303834   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:08.303846   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:08.373320   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:08.373340   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:10.905140   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:10.916748   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:10.916820   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:10.944090   44722 cri.go:89] found id: ""
	I1213 18:44:10.944103   44722 logs.go:282] 0 containers: []
	W1213 18:44:10.944111   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:10.944115   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:10.944176   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:10.969154   44722 cri.go:89] found id: ""
	I1213 18:44:10.969168   44722 logs.go:282] 0 containers: []
	W1213 18:44:10.969174   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:10.969179   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:10.969237   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:10.994056   44722 cri.go:89] found id: ""
	I1213 18:44:10.994070   44722 logs.go:282] 0 containers: []
	W1213 18:44:10.994078   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:10.994082   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:10.994195   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:11.026335   44722 cri.go:89] found id: ""
	I1213 18:44:11.026349   44722 logs.go:282] 0 containers: []
	W1213 18:44:11.026356   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:11.026362   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:11.026420   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:11.051618   44722 cri.go:89] found id: ""
	I1213 18:44:11.051632   44722 logs.go:282] 0 containers: []
	W1213 18:44:11.051639   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:11.051644   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:11.051702   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:11.077796   44722 cri.go:89] found id: ""
	I1213 18:44:11.077811   44722 logs.go:282] 0 containers: []
	W1213 18:44:11.077818   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:11.077824   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:11.077885   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:11.106061   44722 cri.go:89] found id: ""
	I1213 18:44:11.106082   44722 logs.go:282] 0 containers: []
	W1213 18:44:11.106089   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:11.106096   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:11.106107   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:11.172632   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:11.164014   12707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:11.164956   12707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:11.166552   12707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:11.167108   12707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:11.168668   12707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:11.164014   12707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:11.164956   12707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:11.166552   12707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:11.167108   12707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:11.168668   12707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:11.172644   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:11.172654   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:11.241474   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:11.241492   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:11.270376   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:11.270394   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:11.335341   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:11.335360   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:13.846544   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:13.858154   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:13.858216   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:13.891714   44722 cri.go:89] found id: ""
	I1213 18:44:13.891728   44722 logs.go:282] 0 containers: []
	W1213 18:44:13.891735   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:13.891740   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:13.891796   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:13.917089   44722 cri.go:89] found id: ""
	I1213 18:44:13.917103   44722 logs.go:282] 0 containers: []
	W1213 18:44:13.917110   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:13.917115   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:13.917175   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:13.942618   44722 cri.go:89] found id: ""
	I1213 18:44:13.942637   44722 logs.go:282] 0 containers: []
	W1213 18:44:13.942644   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:13.942654   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:13.942717   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:13.972824   44722 cri.go:89] found id: ""
	I1213 18:44:13.972837   44722 logs.go:282] 0 containers: []
	W1213 18:44:13.972844   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:13.972850   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:13.972911   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:14.002454   44722 cri.go:89] found id: ""
	I1213 18:44:14.002478   44722 logs.go:282] 0 containers: []
	W1213 18:44:14.002507   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:14.002515   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:14.002584   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:14.029621   44722 cri.go:89] found id: ""
	I1213 18:44:14.029635   44722 logs.go:282] 0 containers: []
	W1213 18:44:14.029642   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:14.029647   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:14.029705   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:14.059348   44722 cri.go:89] found id: ""
	I1213 18:44:14.059361   44722 logs.go:282] 0 containers: []
	W1213 18:44:14.059368   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:14.059376   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:14.059386   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:14.089028   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:14.089044   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:14.154770   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:14.154787   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:14.165718   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:14.165733   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:14.229870   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:14.221572   12825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:14.222738   12825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:14.223785   12825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:14.224389   12825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:14.225986   12825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:14.221572   12825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:14.222738   12825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:14.223785   12825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:14.224389   12825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:14.225986   12825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:14.229881   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:14.229893   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:16.799799   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:16.810049   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:16.810109   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:16.841177   44722 cri.go:89] found id: ""
	I1213 18:44:16.841190   44722 logs.go:282] 0 containers: []
	W1213 18:44:16.841197   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:16.841202   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:16.841258   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:16.867562   44722 cri.go:89] found id: ""
	I1213 18:44:16.867576   44722 logs.go:282] 0 containers: []
	W1213 18:44:16.867583   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:16.867588   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:16.867647   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:16.894362   44722 cri.go:89] found id: ""
	I1213 18:44:16.894376   44722 logs.go:282] 0 containers: []
	W1213 18:44:16.894383   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:16.894388   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:16.894449   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:16.922192   44722 cri.go:89] found id: ""
	I1213 18:44:16.922205   44722 logs.go:282] 0 containers: []
	W1213 18:44:16.922212   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:16.922217   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:16.922274   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:16.947061   44722 cri.go:89] found id: ""
	I1213 18:44:16.947081   44722 logs.go:282] 0 containers: []
	W1213 18:44:16.947088   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:16.947093   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:16.947151   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:16.973311   44722 cri.go:89] found id: ""
	I1213 18:44:16.973337   44722 logs.go:282] 0 containers: []
	W1213 18:44:16.973345   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:16.973349   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:16.973409   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:17.002040   44722 cri.go:89] found id: ""
	I1213 18:44:17.002056   44722 logs.go:282] 0 containers: []
	W1213 18:44:17.002077   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:17.002086   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:17.002097   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:17.070995   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:17.062754   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:17.063352   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:17.064945   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:17.065473   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:17.066944   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:17.062754   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:17.063352   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:17.064945   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:17.065473   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:17.066944   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:17.071005   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:17.071015   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:17.142450   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:17.142467   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:17.174618   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:17.174636   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:17.245843   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:17.245861   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:19.758316   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:19.768061   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:19.768139   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:19.793023   44722 cri.go:89] found id: ""
	I1213 18:44:19.793037   44722 logs.go:282] 0 containers: []
	W1213 18:44:19.793044   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:19.793049   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:19.793113   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:19.817629   44722 cri.go:89] found id: ""
	I1213 18:44:19.817643   44722 logs.go:282] 0 containers: []
	W1213 18:44:19.817649   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:19.817654   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:19.817710   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:19.851145   44722 cri.go:89] found id: ""
	I1213 18:44:19.851159   44722 logs.go:282] 0 containers: []
	W1213 18:44:19.851166   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:19.851170   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:19.851234   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:19.881252   44722 cri.go:89] found id: ""
	I1213 18:44:19.881265   44722 logs.go:282] 0 containers: []
	W1213 18:44:19.881272   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:19.881277   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:19.881339   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:19.912741   44722 cri.go:89] found id: ""
	I1213 18:44:19.912754   44722 logs.go:282] 0 containers: []
	W1213 18:44:19.912761   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:19.912766   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:19.912823   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:19.940085   44722 cri.go:89] found id: ""
	I1213 18:44:19.940098   44722 logs.go:282] 0 containers: []
	W1213 18:44:19.940105   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:19.940110   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:19.940168   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:19.967047   44722 cri.go:89] found id: ""
	I1213 18:44:19.967061   44722 logs.go:282] 0 containers: []
	W1213 18:44:19.967067   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:19.967081   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:19.967092   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:20.039016   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:20.039038   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:20.052809   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:20.052826   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:20.124568   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:20.115906   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:20.116315   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:20.118019   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:20.118655   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:20.120394   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:20.115906   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:20.116315   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:20.118019   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:20.118655   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:20.120394   13026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:20.124579   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:20.124595   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:20.192989   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:20.193017   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:22.722315   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:22.732622   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:22.732684   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:22.757530   44722 cri.go:89] found id: ""
	I1213 18:44:22.757544   44722 logs.go:282] 0 containers: []
	W1213 18:44:22.757551   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:22.757556   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:22.757614   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:22.783868   44722 cri.go:89] found id: ""
	I1213 18:44:22.783891   44722 logs.go:282] 0 containers: []
	W1213 18:44:22.783899   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:22.783906   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:22.783973   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:22.809581   44722 cri.go:89] found id: ""
	I1213 18:44:22.809602   44722 logs.go:282] 0 containers: []
	W1213 18:44:22.809610   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:22.809615   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:22.809676   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:22.844651   44722 cri.go:89] found id: ""
	I1213 18:44:22.844665   44722 logs.go:282] 0 containers: []
	W1213 18:44:22.844672   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:22.844677   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:22.844734   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:22.878207   44722 cri.go:89] found id: ""
	I1213 18:44:22.878221   44722 logs.go:282] 0 containers: []
	W1213 18:44:22.878228   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:22.878233   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:22.878291   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:22.909295   44722 cri.go:89] found id: ""
	I1213 18:44:22.909309   44722 logs.go:282] 0 containers: []
	W1213 18:44:22.909316   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:22.909322   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:22.909382   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:22.936178   44722 cri.go:89] found id: ""
	I1213 18:44:22.936191   44722 logs.go:282] 0 containers: []
	W1213 18:44:22.936207   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:22.936215   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:22.936225   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:23.005296   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:22.992378   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:22.993185   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:22.994804   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:22.995396   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:22.997070   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:22.992378   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:22.993185   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:22.994804   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:22.995396   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:22.997070   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:23.005308   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:23.005319   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:23.079778   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:23.079797   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:23.109955   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:23.109982   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:23.176235   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:23.176252   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:25.689578   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:25.699921   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:25.699979   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:25.723877   44722 cri.go:89] found id: ""
	I1213 18:44:25.723891   44722 logs.go:282] 0 containers: []
	W1213 18:44:25.723898   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:25.723902   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:25.723959   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:25.749128   44722 cri.go:89] found id: ""
	I1213 18:44:25.749142   44722 logs.go:282] 0 containers: []
	W1213 18:44:25.749148   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:25.749153   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:25.749209   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:25.773791   44722 cri.go:89] found id: ""
	I1213 18:44:25.773811   44722 logs.go:282] 0 containers: []
	W1213 18:44:25.773818   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:25.773823   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:25.773881   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:25.799904   44722 cri.go:89] found id: ""
	I1213 18:44:25.799917   44722 logs.go:282] 0 containers: []
	W1213 18:44:25.799924   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:25.799929   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:25.799988   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:25.825978   44722 cri.go:89] found id: ""
	I1213 18:44:25.825992   44722 logs.go:282] 0 containers: []
	W1213 18:44:25.825999   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:25.826004   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:25.826061   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:25.861824   44722 cri.go:89] found id: ""
	I1213 18:44:25.861838   44722 logs.go:282] 0 containers: []
	W1213 18:44:25.861854   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:25.861860   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:25.861917   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:25.899196   44722 cri.go:89] found id: ""
	I1213 18:44:25.899209   44722 logs.go:282] 0 containers: []
	W1213 18:44:25.899227   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:25.899235   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:25.899245   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:25.962230   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:25.953208   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:25.953997   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:25.955726   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:25.956332   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:25.957845   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:25.953208   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:25.953997   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:25.955726   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:25.956332   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:25.957845   13229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:25.962249   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:25.962260   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:26.029250   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:26.029269   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:26.058026   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:26.058045   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:26.126957   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:26.126975   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:28.638630   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:28.649197   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:28.649261   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:28.678140   44722 cri.go:89] found id: ""
	I1213 18:44:28.678155   44722 logs.go:282] 0 containers: []
	W1213 18:44:28.678162   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:28.678166   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:28.678225   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:28.704240   44722 cri.go:89] found id: ""
	I1213 18:44:28.704253   44722 logs.go:282] 0 containers: []
	W1213 18:44:28.704266   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:28.704271   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:28.704332   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:28.729471   44722 cri.go:89] found id: ""
	I1213 18:44:28.729484   44722 logs.go:282] 0 containers: []
	W1213 18:44:28.729492   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:28.729499   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:28.729560   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:28.755384   44722 cri.go:89] found id: ""
	I1213 18:44:28.755397   44722 logs.go:282] 0 containers: []
	W1213 18:44:28.755404   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:28.755419   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:28.755527   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:28.780729   44722 cri.go:89] found id: ""
	I1213 18:44:28.780742   44722 logs.go:282] 0 containers: []
	W1213 18:44:28.780749   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:28.780754   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:28.780819   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:28.807414   44722 cri.go:89] found id: ""
	I1213 18:44:28.807428   44722 logs.go:282] 0 containers: []
	W1213 18:44:28.807434   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:28.807439   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:28.807495   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:28.834478   44722 cri.go:89] found id: ""
	I1213 18:44:28.834492   44722 logs.go:282] 0 containers: []
	W1213 18:44:28.834501   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:28.834509   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:28.834519   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:28.928552   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:28.919277   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:28.920155   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:28.921759   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:28.922310   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:28.923982   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:28.919277   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:28.920155   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:28.921759   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:28.922310   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:28.923982   13332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:28.928563   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:28.928572   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:28.998427   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:28.998448   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:29.028696   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:29.028713   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:29.094175   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:29.094194   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:31.605517   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:31.616232   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:31.616297   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:31.642711   44722 cri.go:89] found id: ""
	I1213 18:44:31.642725   44722 logs.go:282] 0 containers: []
	W1213 18:44:31.642733   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:31.642738   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:31.642796   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:31.669186   44722 cri.go:89] found id: ""
	I1213 18:44:31.669201   44722 logs.go:282] 0 containers: []
	W1213 18:44:31.669208   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:31.669212   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:31.669271   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:31.696754   44722 cri.go:89] found id: ""
	I1213 18:44:31.696768   44722 logs.go:282] 0 containers: []
	W1213 18:44:31.696775   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:31.696780   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:31.696840   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:31.722602   44722 cri.go:89] found id: ""
	I1213 18:44:31.722616   44722 logs.go:282] 0 containers: []
	W1213 18:44:31.722623   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:31.722628   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:31.722687   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:31.749280   44722 cri.go:89] found id: ""
	I1213 18:44:31.749294   44722 logs.go:282] 0 containers: []
	W1213 18:44:31.749302   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:31.749307   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:31.749386   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:31.774452   44722 cri.go:89] found id: ""
	I1213 18:44:31.774466   44722 logs.go:282] 0 containers: []
	W1213 18:44:31.774473   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:31.774478   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:31.774536   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:31.804250   44722 cri.go:89] found id: ""
	I1213 18:44:31.804264   44722 logs.go:282] 0 containers: []
	W1213 18:44:31.804271   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:31.804278   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:31.804288   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:31.876057   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:31.876075   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:31.887830   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:31.887845   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:31.956181   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:31.947856   13442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:31.948537   13442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:31.950179   13442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:31.950675   13442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:31.952236   13442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:31.947856   13442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:31.948537   13442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:31.950179   13442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:31.950675   13442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:31.952236   13442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:31.956191   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:31.956202   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:32.025697   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:32.025716   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:34.558938   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:34.569025   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:34.569094   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:34.598446   44722 cri.go:89] found id: ""
	I1213 18:44:34.598459   44722 logs.go:282] 0 containers: []
	W1213 18:44:34.598466   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:34.598470   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:34.598537   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:34.624087   44722 cri.go:89] found id: ""
	I1213 18:44:34.624105   44722 logs.go:282] 0 containers: []
	W1213 18:44:34.624132   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:34.624137   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:34.624204   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:34.649175   44722 cri.go:89] found id: ""
	I1213 18:44:34.649189   44722 logs.go:282] 0 containers: []
	W1213 18:44:34.649196   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:34.649201   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:34.649257   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:34.679802   44722 cri.go:89] found id: ""
	I1213 18:44:34.679816   44722 logs.go:282] 0 containers: []
	W1213 18:44:34.679823   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:34.679828   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:34.679886   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:34.706842   44722 cri.go:89] found id: ""
	I1213 18:44:34.706856   44722 logs.go:282] 0 containers: []
	W1213 18:44:34.706863   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:34.706868   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:34.706928   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:34.732851   44722 cri.go:89] found id: ""
	I1213 18:44:34.732878   44722 logs.go:282] 0 containers: []
	W1213 18:44:34.732885   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:34.732906   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:34.732972   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:34.758491   44722 cri.go:89] found id: ""
	I1213 18:44:34.758504   44722 logs.go:282] 0 containers: []
	W1213 18:44:34.758511   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:34.758520   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:34.758530   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:34.831184   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:34.831212   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:34.854446   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:34.854463   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:34.939932   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:34.930787   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:34.931550   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:34.933427   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:34.934090   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:34.935671   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:34.930787   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:34.931550   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:34.933427   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:34.934090   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:34.935671   13548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:34.939943   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:34.939953   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:35.008351   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:35.008373   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:37.538092   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:37.548372   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:37.548433   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:37.576028   44722 cri.go:89] found id: ""
	I1213 18:44:37.576042   44722 logs.go:282] 0 containers: []
	W1213 18:44:37.576049   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:37.576054   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:37.576116   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:37.601240   44722 cri.go:89] found id: ""
	I1213 18:44:37.601264   44722 logs.go:282] 0 containers: []
	W1213 18:44:37.601272   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:37.601277   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:37.601354   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:37.629739   44722 cri.go:89] found id: ""
	I1213 18:44:37.629752   44722 logs.go:282] 0 containers: []
	W1213 18:44:37.629759   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:37.629764   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:37.629821   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:37.659547   44722 cri.go:89] found id: ""
	I1213 18:44:37.659560   44722 logs.go:282] 0 containers: []
	W1213 18:44:37.659567   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:37.659582   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:37.659639   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:37.687820   44722 cri.go:89] found id: ""
	I1213 18:44:37.687833   44722 logs.go:282] 0 containers: []
	W1213 18:44:37.687841   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:37.687846   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:37.687913   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:37.713950   44722 cri.go:89] found id: ""
	I1213 18:44:37.713964   44722 logs.go:282] 0 containers: []
	W1213 18:44:37.713971   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:37.713976   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:37.714035   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:37.739532   44722 cri.go:89] found id: ""
	I1213 18:44:37.739557   44722 logs.go:282] 0 containers: []
	W1213 18:44:37.739564   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:37.739572   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:37.739588   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:37.769815   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:37.769831   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:37.842765   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:37.842782   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:37.856389   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:37.856405   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:37.939080   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:37.930901   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:37.931464   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:37.933144   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:37.933671   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:37.935120   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:37.930901   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:37.931464   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:37.933144   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:37.933671   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:37.935120   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:37.939091   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:37.939101   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:40.510055   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:40.520003   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:40.520078   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:40.546166   44722 cri.go:89] found id: ""
	I1213 18:44:40.546181   44722 logs.go:282] 0 containers: []
	W1213 18:44:40.546187   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:40.546193   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:40.546255   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:40.575492   44722 cri.go:89] found id: ""
	I1213 18:44:40.575506   44722 logs.go:282] 0 containers: []
	W1213 18:44:40.575512   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:40.575517   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:40.575572   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:40.604021   44722 cri.go:89] found id: ""
	I1213 18:44:40.604034   44722 logs.go:282] 0 containers: []
	W1213 18:44:40.604042   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:40.604047   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:40.604103   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:40.634511   44722 cri.go:89] found id: ""
	I1213 18:44:40.634525   44722 logs.go:282] 0 containers: []
	W1213 18:44:40.634533   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:40.634537   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:40.634597   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:40.659233   44722 cri.go:89] found id: ""
	I1213 18:44:40.659255   44722 logs.go:282] 0 containers: []
	W1213 18:44:40.659263   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:40.659268   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:40.659327   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:40.684289   44722 cri.go:89] found id: ""
	I1213 18:44:40.684314   44722 logs.go:282] 0 containers: []
	W1213 18:44:40.684321   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:40.684326   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:40.684401   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:40.716236   44722 cri.go:89] found id: ""
	I1213 18:44:40.716250   44722 logs.go:282] 0 containers: []
	W1213 18:44:40.716258   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:40.716265   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:40.716277   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:40.743946   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:40.743962   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:40.809441   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:40.809459   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:40.820434   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:40.820458   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:40.906406   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:40.898049   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:40.898672   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:40.900282   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:40.900803   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:40.902445   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:40.898049   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:40.898672   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:40.900282   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:40.900803   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:40.902445   13768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:40.906416   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:40.906426   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:43.474264   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:43.484255   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:43.484319   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:43.511963   44722 cri.go:89] found id: ""
	I1213 18:44:43.511977   44722 logs.go:282] 0 containers: []
	W1213 18:44:43.511984   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:43.511989   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:43.512049   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:43.537311   44722 cri.go:89] found id: ""
	I1213 18:44:43.537332   44722 logs.go:282] 0 containers: []
	W1213 18:44:43.537339   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:43.537343   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:43.537433   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:43.564197   44722 cri.go:89] found id: ""
	I1213 18:44:43.564211   44722 logs.go:282] 0 containers: []
	W1213 18:44:43.564218   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:43.564222   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:43.564278   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:43.590140   44722 cri.go:89] found id: ""
	I1213 18:44:43.590154   44722 logs.go:282] 0 containers: []
	W1213 18:44:43.590160   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:43.590166   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:43.590226   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:43.615885   44722 cri.go:89] found id: ""
	I1213 18:44:43.615900   44722 logs.go:282] 0 containers: []
	W1213 18:44:43.615916   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:43.615921   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:43.615987   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:43.640848   44722 cri.go:89] found id: ""
	I1213 18:44:43.640862   44722 logs.go:282] 0 containers: []
	W1213 18:44:43.640868   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:43.640873   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:43.640931   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:43.665363   44722 cri.go:89] found id: ""
	I1213 18:44:43.665377   44722 logs.go:282] 0 containers: []
	W1213 18:44:43.665384   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:43.665391   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:43.665403   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:43.676205   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:43.676227   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:43.739640   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:43.731228   13863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:43.732007   13863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:43.733627   13863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:43.734165   13863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:43.735773   13863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:43.731228   13863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:43.732007   13863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:43.733627   13863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:43.734165   13863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:43.735773   13863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:43.739650   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:43.739661   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:43.807987   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:43.808008   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:43.851586   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:43.851601   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:46.426151   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:46.436240   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:46.436307   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:46.469030   44722 cri.go:89] found id: ""
	I1213 18:44:46.469044   44722 logs.go:282] 0 containers: []
	W1213 18:44:46.469051   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:46.469056   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:46.469115   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:46.494555   44722 cri.go:89] found id: ""
	I1213 18:44:46.494568   44722 logs.go:282] 0 containers: []
	W1213 18:44:46.494575   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:46.494580   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:46.494638   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:46.519291   44722 cri.go:89] found id: ""
	I1213 18:44:46.519305   44722 logs.go:282] 0 containers: []
	W1213 18:44:46.519312   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:46.519316   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:46.519371   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:46.547775   44722 cri.go:89] found id: ""
	I1213 18:44:46.547790   44722 logs.go:282] 0 containers: []
	W1213 18:44:46.547797   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:46.547802   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:46.547860   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:46.572951   44722 cri.go:89] found id: ""
	I1213 18:44:46.572965   44722 logs.go:282] 0 containers: []
	W1213 18:44:46.572972   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:46.572978   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:46.573096   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:46.598953   44722 cri.go:89] found id: ""
	I1213 18:44:46.598967   44722 logs.go:282] 0 containers: []
	W1213 18:44:46.598973   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:46.598979   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:46.599036   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:46.624426   44722 cri.go:89] found id: ""
	I1213 18:44:46.624440   44722 logs.go:282] 0 containers: []
	W1213 18:44:46.624447   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:46.624454   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:46.624465   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:46.656272   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:46.656289   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:46.720505   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:46.720523   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:46.731422   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:46.731438   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:46.794954   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:46.786465   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:46.786956   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:46.788689   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:46.789067   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:46.790678   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:46.786465   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:46.786956   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:46.788689   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:46.789067   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:46.790678   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:46.794964   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:46.794974   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:49.368713   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:49.379093   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:49.379150   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:49.404638   44722 cri.go:89] found id: ""
	I1213 18:44:49.404652   44722 logs.go:282] 0 containers: []
	W1213 18:44:49.404670   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:49.404676   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:49.404743   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:49.432165   44722 cri.go:89] found id: ""
	I1213 18:44:49.432185   44722 logs.go:282] 0 containers: []
	W1213 18:44:49.432192   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:49.432203   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:49.432274   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:49.457580   44722 cri.go:89] found id: ""
	I1213 18:44:49.457594   44722 logs.go:282] 0 containers: []
	W1213 18:44:49.457601   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:49.457605   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:49.457661   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:49.482518   44722 cri.go:89] found id: ""
	I1213 18:44:49.482531   44722 logs.go:282] 0 containers: []
	W1213 18:44:49.482539   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:49.482544   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:49.482604   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:49.508421   44722 cri.go:89] found id: ""
	I1213 18:44:49.508435   44722 logs.go:282] 0 containers: []
	W1213 18:44:49.508442   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:49.508447   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:49.508505   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:49.533273   44722 cri.go:89] found id: ""
	I1213 18:44:49.533286   44722 logs.go:282] 0 containers: []
	W1213 18:44:49.533293   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:49.533298   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:49.533363   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:49.559407   44722 cri.go:89] found id: ""
	I1213 18:44:49.559421   44722 logs.go:282] 0 containers: []
	W1213 18:44:49.559428   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:49.559436   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:49.559447   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:49.586863   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:49.586880   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:49.655301   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:49.655318   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:49.666641   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:49.666657   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:49.731547   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:49.723390   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:49.723925   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:49.725596   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:49.726135   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:49.727809   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:49.723390   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:49.723925   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:49.725596   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:49.726135   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:49.727809   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:49.731558   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:49.731569   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:52.302228   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:52.312354   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:52.312414   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:52.339337   44722 cri.go:89] found id: ""
	I1213 18:44:52.339351   44722 logs.go:282] 0 containers: []
	W1213 18:44:52.339358   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:52.339363   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:52.339428   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:52.364722   44722 cri.go:89] found id: ""
	I1213 18:44:52.364736   44722 logs.go:282] 0 containers: []
	W1213 18:44:52.364744   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:52.364748   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:52.364807   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:52.392869   44722 cri.go:89] found id: ""
	I1213 18:44:52.392883   44722 logs.go:282] 0 containers: []
	W1213 18:44:52.392889   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:52.392894   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:52.392952   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:52.420101   44722 cri.go:89] found id: ""
	I1213 18:44:52.420115   44722 logs.go:282] 0 containers: []
	W1213 18:44:52.420122   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:52.420126   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:52.420186   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:52.444708   44722 cri.go:89] found id: ""
	I1213 18:44:52.444721   44722 logs.go:282] 0 containers: []
	W1213 18:44:52.444728   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:52.444733   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:52.444789   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:52.470027   44722 cri.go:89] found id: ""
	I1213 18:44:52.470041   44722 logs.go:282] 0 containers: []
	W1213 18:44:52.470048   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:52.470053   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:52.470112   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:52.494761   44722 cri.go:89] found id: ""
	I1213 18:44:52.494775   44722 logs.go:282] 0 containers: []
	W1213 18:44:52.494782   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:52.494789   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:52.494799   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:52.563435   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:52.563455   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:52.597529   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:52.597545   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:52.667889   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:52.667909   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:52.679020   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:52.679036   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:52.744141   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:52.735527   14192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:52.736263   14192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:52.738012   14192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:52.738630   14192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:52.740366   14192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:52.735527   14192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:52.736263   14192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:52.738012   14192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:52.738630   14192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:52.740366   14192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:55.245804   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:55.256306   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:55.256370   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:55.283000   44722 cri.go:89] found id: ""
	I1213 18:44:55.283013   44722 logs.go:282] 0 containers: []
	W1213 18:44:55.283020   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:55.283025   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:55.283082   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:55.313671   44722 cri.go:89] found id: ""
	I1213 18:44:55.313684   44722 logs.go:282] 0 containers: []
	W1213 18:44:55.313690   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:55.313695   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:55.313755   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:55.342037   44722 cri.go:89] found id: ""
	I1213 18:44:55.342051   44722 logs.go:282] 0 containers: []
	W1213 18:44:55.342059   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:55.342064   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:55.342127   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:55.367525   44722 cri.go:89] found id: ""
	I1213 18:44:55.367538   44722 logs.go:282] 0 containers: []
	W1213 18:44:55.367557   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:55.367562   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:55.367628   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:55.393243   44722 cri.go:89] found id: ""
	I1213 18:44:55.393257   44722 logs.go:282] 0 containers: []
	W1213 18:44:55.393274   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:55.393280   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:55.393353   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:55.418513   44722 cri.go:89] found id: ""
	I1213 18:44:55.418527   44722 logs.go:282] 0 containers: []
	W1213 18:44:55.418534   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:55.418539   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:55.418607   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:55.443468   44722 cri.go:89] found id: ""
	I1213 18:44:55.443483   44722 logs.go:282] 0 containers: []
	W1213 18:44:55.443490   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:55.443500   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:55.443511   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:55.515427   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:55.507029   14277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:55.507943   14277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:55.509657   14277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:55.510148   14277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:55.511618   14277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:55.507029   14277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:55.507943   14277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:55.509657   14277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:55.510148   14277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:55.511618   14277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:44:55.515437   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:55.515448   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:55.586865   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:55.586885   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:55.616109   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:55.616125   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:55.685952   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:55.685972   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:58.198520   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:44:58.208638   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:44:58.208696   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:44:58.234480   44722 cri.go:89] found id: ""
	I1213 18:44:58.234494   44722 logs.go:282] 0 containers: []
	W1213 18:44:58.234501   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:44:58.234506   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:44:58.234561   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:44:58.258261   44722 cri.go:89] found id: ""
	I1213 18:44:58.258274   44722 logs.go:282] 0 containers: []
	W1213 18:44:58.258281   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:44:58.258287   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:44:58.258358   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:44:58.282891   44722 cri.go:89] found id: ""
	I1213 18:44:58.282904   44722 logs.go:282] 0 containers: []
	W1213 18:44:58.282911   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:44:58.282916   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:44:58.282971   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:44:58.315746   44722 cri.go:89] found id: ""
	I1213 18:44:58.315760   44722 logs.go:282] 0 containers: []
	W1213 18:44:58.315766   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:44:58.315771   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:44:58.315830   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:44:58.340701   44722 cri.go:89] found id: ""
	I1213 18:44:58.340714   44722 logs.go:282] 0 containers: []
	W1213 18:44:58.340721   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:44:58.340726   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:44:58.340792   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:44:58.369974   44722 cri.go:89] found id: ""
	I1213 18:44:58.369987   44722 logs.go:282] 0 containers: []
	W1213 18:44:58.369994   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:44:58.369998   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:44:58.370063   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:44:58.398903   44722 cri.go:89] found id: ""
	I1213 18:44:58.398917   44722 logs.go:282] 0 containers: []
	W1213 18:44:58.398924   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:44:58.398932   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:44:58.398945   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:44:58.468133   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:44:58.468153   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:44:58.495769   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:44:58.495787   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:44:58.562032   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:44:58.562052   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:44:58.573192   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:44:58.573208   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:44:58.639058   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:44:58.631176   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:58.631711   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:58.633329   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:58.633843   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:58.635281   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:44:58.631176   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:58.631711   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:58.633329   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:58.633843   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:44:58.635281   14404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:01.139326   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:01.150701   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:01.150773   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:01.180572   44722 cri.go:89] found id: ""
	I1213 18:45:01.180597   44722 logs.go:282] 0 containers: []
	W1213 18:45:01.180627   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:01.180632   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:01.180723   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:01.210001   44722 cri.go:89] found id: ""
	I1213 18:45:01.210027   44722 logs.go:282] 0 containers: []
	W1213 18:45:01.210035   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:01.210040   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:01.210144   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:01.240388   44722 cri.go:89] found id: ""
	I1213 18:45:01.240411   44722 logs.go:282] 0 containers: []
	W1213 18:45:01.240419   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:01.240425   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:01.240500   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:01.270469   44722 cri.go:89] found id: ""
	I1213 18:45:01.270485   44722 logs.go:282] 0 containers: []
	W1213 18:45:01.270492   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:01.270498   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:01.270560   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:01.298917   44722 cri.go:89] found id: ""
	I1213 18:45:01.298932   44722 logs.go:282] 0 containers: []
	W1213 18:45:01.298950   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:01.298956   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:01.299047   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:01.326174   44722 cri.go:89] found id: ""
	I1213 18:45:01.326188   44722 logs.go:282] 0 containers: []
	W1213 18:45:01.326195   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:01.326200   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:01.326260   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:01.355316   44722 cri.go:89] found id: ""
	I1213 18:45:01.355331   44722 logs.go:282] 0 containers: []
	W1213 18:45:01.355339   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:01.355348   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:01.355360   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:01.431176   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:01.431206   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:01.443676   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:01.443695   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:01.512045   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:01.503556   14499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:01.504288   14499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:01.506017   14499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:01.506375   14499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:01.508015   14499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:01.503556   14499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:01.504288   14499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:01.506017   14499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:01.506375   14499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:01.508015   14499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:01.512056   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:01.512066   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:01.581540   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:01.581560   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:04.113152   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:04.126133   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:04.126190   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:04.157022   44722 cri.go:89] found id: ""
	I1213 18:45:04.157037   44722 logs.go:282] 0 containers: []
	W1213 18:45:04.157044   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:04.157050   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:04.157111   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:04.184060   44722 cri.go:89] found id: ""
	I1213 18:45:04.184073   44722 logs.go:282] 0 containers: []
	W1213 18:45:04.184080   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:04.184085   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:04.184144   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:04.210310   44722 cri.go:89] found id: ""
	I1213 18:45:04.210323   44722 logs.go:282] 0 containers: []
	W1213 18:45:04.210330   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:04.210336   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:04.210398   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:04.236685   44722 cri.go:89] found id: ""
	I1213 18:45:04.236700   44722 logs.go:282] 0 containers: []
	W1213 18:45:04.236707   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:04.236712   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:04.236771   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:04.265948   44722 cri.go:89] found id: ""
	I1213 18:45:04.265961   44722 logs.go:282] 0 containers: []
	W1213 18:45:04.265968   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:04.265973   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:04.266029   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:04.291029   44722 cri.go:89] found id: ""
	I1213 18:45:04.291042   44722 logs.go:282] 0 containers: []
	W1213 18:45:04.291049   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:04.291065   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:04.291122   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:04.316748   44722 cri.go:89] found id: ""
	I1213 18:45:04.316762   44722 logs.go:282] 0 containers: []
	W1213 18:45:04.316768   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:04.316787   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:04.316798   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:04.380978   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:04.380996   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:04.392325   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:04.392342   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:04.459627   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:04.451449   14606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:04.452151   14606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:04.453706   14606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:04.454141   14606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:04.455629   14606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:04.451449   14606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:04.452151   14606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:04.453706   14606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:04.454141   14606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:04.455629   14606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:04.459637   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:04.459648   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:04.527567   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:04.527587   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:07.060097   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:07.070755   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:07.070814   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:07.098777   44722 cri.go:89] found id: ""
	I1213 18:45:07.098790   44722 logs.go:282] 0 containers: []
	W1213 18:45:07.098797   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:07.098802   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:07.098863   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:07.126857   44722 cri.go:89] found id: ""
	I1213 18:45:07.126870   44722 logs.go:282] 0 containers: []
	W1213 18:45:07.126877   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:07.126882   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:07.126938   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:07.154665   44722 cri.go:89] found id: ""
	I1213 18:45:07.154679   44722 logs.go:282] 0 containers: []
	W1213 18:45:07.154686   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:07.154691   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:07.154751   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:07.183998   44722 cri.go:89] found id: ""
	I1213 18:45:07.184011   44722 logs.go:282] 0 containers: []
	W1213 18:45:07.184018   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:07.184023   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:07.184079   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:07.209217   44722 cri.go:89] found id: ""
	I1213 18:45:07.209230   44722 logs.go:282] 0 containers: []
	W1213 18:45:07.209238   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:07.209249   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:07.209309   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:07.238297   44722 cri.go:89] found id: ""
	I1213 18:45:07.238321   44722 logs.go:282] 0 containers: []
	W1213 18:45:07.238328   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:07.238333   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:07.238392   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:07.268115   44722 cri.go:89] found id: ""
	I1213 18:45:07.268130   44722 logs.go:282] 0 containers: []
	W1213 18:45:07.268136   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:07.268144   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:07.268156   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:07.337456   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:07.337475   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:07.365283   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:07.365299   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:07.433864   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:07.433882   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:07.445039   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:07.445055   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:07.509195   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:07.500621   14726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:07.500993   14726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:07.502681   14726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:07.503001   14726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:07.504545   14726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:07.500621   14726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:07.500993   14726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:07.502681   14726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:07.503001   14726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:07.504545   14726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:10.010342   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:10.026847   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:10.026923   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:10.055758   44722 cri.go:89] found id: ""
	I1213 18:45:10.055773   44722 logs.go:282] 0 containers: []
	W1213 18:45:10.055781   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:10.055786   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:10.055847   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:10.084492   44722 cri.go:89] found id: ""
	I1213 18:45:10.084508   44722 logs.go:282] 0 containers: []
	W1213 18:45:10.084515   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:10.084521   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:10.084579   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:10.124733   44722 cri.go:89] found id: ""
	I1213 18:45:10.124748   44722 logs.go:282] 0 containers: []
	W1213 18:45:10.124756   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:10.124760   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:10.124823   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:10.167562   44722 cri.go:89] found id: ""
	I1213 18:45:10.167575   44722 logs.go:282] 0 containers: []
	W1213 18:45:10.167583   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:10.167588   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:10.167647   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:10.196162   44722 cri.go:89] found id: ""
	I1213 18:45:10.196178   44722 logs.go:282] 0 containers: []
	W1213 18:45:10.196185   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:10.196190   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:10.196251   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:10.222349   44722 cri.go:89] found id: ""
	I1213 18:45:10.222362   44722 logs.go:282] 0 containers: []
	W1213 18:45:10.222370   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:10.222375   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:10.222433   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:10.252822   44722 cri.go:89] found id: ""
	I1213 18:45:10.252838   44722 logs.go:282] 0 containers: []
	W1213 18:45:10.252848   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:10.252856   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:10.252867   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:10.318555   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:10.318574   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:10.330833   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:10.330848   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:10.403119   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:10.391784   14819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:10.392505   14819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:10.394095   14819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:10.394656   14819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:10.396739   14819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:10.391784   14819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:10.392505   14819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:10.394095   14819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:10.394656   14819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:10.396739   14819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:10.403129   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:10.403139   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:10.476776   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:10.476796   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:13.006030   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:13.016994   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:13.017078   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:13.047302   44722 cri.go:89] found id: ""
	I1213 18:45:13.047316   44722 logs.go:282] 0 containers: []
	W1213 18:45:13.047322   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:13.047327   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:13.047390   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:13.072990   44722 cri.go:89] found id: ""
	I1213 18:45:13.073014   44722 logs.go:282] 0 containers: []
	W1213 18:45:13.073024   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:13.073029   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:13.073086   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:13.104144   44722 cri.go:89] found id: ""
	I1213 18:45:13.104158   44722 logs.go:282] 0 containers: []
	W1213 18:45:13.104165   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:13.104169   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:13.104233   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:13.133122   44722 cri.go:89] found id: ""
	I1213 18:45:13.133135   44722 logs.go:282] 0 containers: []
	W1213 18:45:13.133141   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:13.133147   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:13.133228   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:13.165373   44722 cri.go:89] found id: ""
	I1213 18:45:13.165399   44722 logs.go:282] 0 containers: []
	W1213 18:45:13.165406   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:13.165411   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:13.165473   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:13.191991   44722 cri.go:89] found id: ""
	I1213 18:45:13.192004   44722 logs.go:282] 0 containers: []
	W1213 18:45:13.192012   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:13.192017   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:13.192082   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:13.217774   44722 cri.go:89] found id: ""
	I1213 18:45:13.217788   44722 logs.go:282] 0 containers: []
	W1213 18:45:13.217795   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:13.217802   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:13.217813   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:13.284517   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:13.275477   14920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:13.276368   14920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:13.278192   14920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:13.278786   14920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:13.280431   14920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:13.275477   14920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:13.276368   14920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:13.278192   14920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:13.278786   14920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:13.280431   14920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:13.284527   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:13.284538   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:13.353730   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:13.353749   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:13.384210   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:13.384225   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:13.452832   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:13.452849   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:15.964206   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:15.976388   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:15.976453   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:16.006122   44722 cri.go:89] found id: ""
	I1213 18:45:16.006136   44722 logs.go:282] 0 containers: []
	W1213 18:45:16.006143   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:16.006149   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:16.006211   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:16.031686   44722 cri.go:89] found id: ""
	I1213 18:45:16.031700   44722 logs.go:282] 0 containers: []
	W1213 18:45:16.031707   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:16.031712   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:16.031768   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:16.057702   44722 cri.go:89] found id: ""
	I1213 18:45:16.057715   44722 logs.go:282] 0 containers: []
	W1213 18:45:16.057722   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:16.057728   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:16.057783   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:16.090888   44722 cri.go:89] found id: ""
	I1213 18:45:16.090913   44722 logs.go:282] 0 containers: []
	W1213 18:45:16.090921   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:16.090927   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:16.090997   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:16.128051   44722 cri.go:89] found id: ""
	I1213 18:45:16.128075   44722 logs.go:282] 0 containers: []
	W1213 18:45:16.128083   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:16.128089   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:16.128160   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:16.157962   44722 cri.go:89] found id: ""
	I1213 18:45:16.157986   44722 logs.go:282] 0 containers: []
	W1213 18:45:16.157993   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:16.157999   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:16.158057   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:16.184049   44722 cri.go:89] found id: ""
	I1213 18:45:16.184063   44722 logs.go:282] 0 containers: []
	W1213 18:45:16.184070   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:16.184077   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:16.184088   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:16.250129   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:16.250149   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:16.261107   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:16.261125   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:16.330408   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:16.321894   15030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:16.322673   15030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:16.324350   15030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:16.324661   15030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:16.326266   15030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:16.321894   15030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:16.322673   15030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:16.324350   15030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:16.324661   15030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:16.326266   15030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:16.330418   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:16.330428   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:16.398576   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:16.398594   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:18.928496   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:18.938797   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:18.938873   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:18.964909   44722 cri.go:89] found id: ""
	I1213 18:45:18.964924   44722 logs.go:282] 0 containers: []
	W1213 18:45:18.964932   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:18.964939   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:18.964999   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:18.991414   44722 cri.go:89] found id: ""
	I1213 18:45:18.991428   44722 logs.go:282] 0 containers: []
	W1213 18:45:18.991446   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:18.991451   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:18.991508   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:19.021961   44722 cri.go:89] found id: ""
	I1213 18:45:19.021976   44722 logs.go:282] 0 containers: []
	W1213 18:45:19.021983   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:19.021988   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:19.022055   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:19.046931   44722 cri.go:89] found id: ""
	I1213 18:45:19.046945   44722 logs.go:282] 0 containers: []
	W1213 18:45:19.046952   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:19.046957   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:19.047013   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:19.072683   44722 cri.go:89] found id: ""
	I1213 18:45:19.072696   44722 logs.go:282] 0 containers: []
	W1213 18:45:19.072703   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:19.072708   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:19.072778   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:19.100627   44722 cri.go:89] found id: ""
	I1213 18:45:19.100643   44722 logs.go:282] 0 containers: []
	W1213 18:45:19.100651   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:19.100656   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:19.100720   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:19.130142   44722 cri.go:89] found id: ""
	I1213 18:45:19.130157   44722 logs.go:282] 0 containers: []
	W1213 18:45:19.130163   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:19.130171   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:19.130182   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:19.197474   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:19.197494   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:19.208889   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:19.208908   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:19.274541   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:19.265647   15139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:19.266238   15139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:19.267928   15139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:19.268736   15139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:19.270556   15139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:19.265647   15139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:19.266238   15139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:19.267928   15139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:19.268736   15139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:19.270556   15139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:19.274551   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:19.274561   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:19.342919   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:19.342938   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:21.872871   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:21.883492   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:21.883550   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:21.910011   44722 cri.go:89] found id: ""
	I1213 18:45:21.910025   44722 logs.go:282] 0 containers: []
	W1213 18:45:21.910032   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:21.910037   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:21.910094   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:21.935440   44722 cri.go:89] found id: ""
	I1213 18:45:21.935454   44722 logs.go:282] 0 containers: []
	W1213 18:45:21.935461   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:21.935476   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:21.935535   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:21.970166   44722 cri.go:89] found id: ""
	I1213 18:45:21.970181   44722 logs.go:282] 0 containers: []
	W1213 18:45:21.970188   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:21.970193   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:21.970254   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:21.996521   44722 cri.go:89] found id: ""
	I1213 18:45:21.996544   44722 logs.go:282] 0 containers: []
	W1213 18:45:21.996552   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:21.996557   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:21.996625   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:22.026015   44722 cri.go:89] found id: ""
	I1213 18:45:22.026030   44722 logs.go:282] 0 containers: []
	W1213 18:45:22.026048   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:22.026054   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:22.026136   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:22.052512   44722 cri.go:89] found id: ""
	I1213 18:45:22.052526   44722 logs.go:282] 0 containers: []
	W1213 18:45:22.052533   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:22.052547   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:22.052634   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:22.087211   44722 cri.go:89] found id: ""
	I1213 18:45:22.087242   44722 logs.go:282] 0 containers: []
	W1213 18:45:22.087249   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:22.087258   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:22.087268   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:22.161238   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:22.161256   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:22.172311   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:22.172327   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:22.235337   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:22.226748   15248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:22.227404   15248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:22.229399   15248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:22.229780   15248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:22.231333   15248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:22.226748   15248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:22.227404   15248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:22.229399   15248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:22.229780   15248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:22.231333   15248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:22.235349   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:22.235360   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:22.304771   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:22.304790   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:24.834025   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:24.844561   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:24.844623   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:24.869497   44722 cri.go:89] found id: ""
	I1213 18:45:24.869512   44722 logs.go:282] 0 containers: []
	W1213 18:45:24.869519   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:24.869524   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:24.869582   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:24.899663   44722 cri.go:89] found id: ""
	I1213 18:45:24.899677   44722 logs.go:282] 0 containers: []
	W1213 18:45:24.899685   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:24.899690   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:24.899750   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:24.929664   44722 cri.go:89] found id: ""
	I1213 18:45:24.929678   44722 logs.go:282] 0 containers: []
	W1213 18:45:24.929685   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:24.929689   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:24.929748   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:24.954943   44722 cri.go:89] found id: ""
	I1213 18:45:24.954957   44722 logs.go:282] 0 containers: []
	W1213 18:45:24.954964   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:24.954969   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:24.955024   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:24.981964   44722 cri.go:89] found id: ""
	I1213 18:45:24.981978   44722 logs.go:282] 0 containers: []
	W1213 18:45:24.981985   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:24.981991   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:24.982048   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:25.024491   44722 cri.go:89] found id: ""
	I1213 18:45:25.024507   44722 logs.go:282] 0 containers: []
	W1213 18:45:25.024514   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:25.024519   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:25.024587   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:25.059717   44722 cri.go:89] found id: ""
	I1213 18:45:25.059732   44722 logs.go:282] 0 containers: []
	W1213 18:45:25.059740   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:25.059747   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:25.059758   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:25.137684   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:25.137709   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:25.152450   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:25.152466   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:25.224073   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:25.215282   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:25.215897   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:25.217852   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:25.218715   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:25.219908   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:25.215282   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:25.215897   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:25.217852   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:25.218715   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:25.219908   15355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:25.224083   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:25.224095   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:25.293145   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:25.293164   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:27.825368   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:27.835872   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:27.835932   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:27.861658   44722 cri.go:89] found id: ""
	I1213 18:45:27.861672   44722 logs.go:282] 0 containers: []
	W1213 18:45:27.861679   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:27.861684   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:27.861742   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:27.886615   44722 cri.go:89] found id: ""
	I1213 18:45:27.886629   44722 logs.go:282] 0 containers: []
	W1213 18:45:27.886636   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:27.886641   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:27.886697   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:27.915655   44722 cri.go:89] found id: ""
	I1213 18:45:27.915669   44722 logs.go:282] 0 containers: []
	W1213 18:45:27.915676   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:27.915681   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:27.915743   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:27.940463   44722 cri.go:89] found id: ""
	I1213 18:45:27.940477   44722 logs.go:282] 0 containers: []
	W1213 18:45:27.940484   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:27.940489   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:27.940546   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:27.970042   44722 cri.go:89] found id: ""
	I1213 18:45:27.970056   44722 logs.go:282] 0 containers: []
	W1213 18:45:27.970063   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:27.970068   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:27.970125   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:27.996687   44722 cri.go:89] found id: ""
	I1213 18:45:27.996702   44722 logs.go:282] 0 containers: []
	W1213 18:45:27.996708   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:27.996714   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:27.996773   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:28.025848   44722 cri.go:89] found id: ""
	I1213 18:45:28.025861   44722 logs.go:282] 0 containers: []
	W1213 18:45:28.025868   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:28.025876   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:28.025894   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:28.104265   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:28.104292   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:28.116838   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:28.116855   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:28.189318   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:28.180911   15462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:28.181676   15462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:28.183358   15462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:28.184009   15462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:28.185382   15462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:28.180911   15462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:28.181676   15462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:28.183358   15462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:28.184009   15462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:28.185382   15462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:28.189329   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:28.189340   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:28.257409   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:28.257428   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:30.789289   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:30.799688   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:30.799748   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:30.828658   44722 cri.go:89] found id: ""
	I1213 18:45:30.828672   44722 logs.go:282] 0 containers: []
	W1213 18:45:30.828680   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:30.828688   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:30.828748   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:30.854242   44722 cri.go:89] found id: ""
	I1213 18:45:30.854256   44722 logs.go:282] 0 containers: []
	W1213 18:45:30.854263   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:30.854268   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:30.854325   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:30.879211   44722 cri.go:89] found id: ""
	I1213 18:45:30.879225   44722 logs.go:282] 0 containers: []
	W1213 18:45:30.879235   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:30.879241   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:30.879298   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:30.908380   44722 cri.go:89] found id: ""
	I1213 18:45:30.908394   44722 logs.go:282] 0 containers: []
	W1213 18:45:30.908401   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:30.908406   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:30.908462   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:30.934004   44722 cri.go:89] found id: ""
	I1213 18:45:30.934023   44722 logs.go:282] 0 containers: []
	W1213 18:45:30.934030   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:30.934035   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:30.934094   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:30.959088   44722 cri.go:89] found id: ""
	I1213 18:45:30.959101   44722 logs.go:282] 0 containers: []
	W1213 18:45:30.959108   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:30.959113   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:30.959172   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:30.987128   44722 cri.go:89] found id: ""
	I1213 18:45:30.987142   44722 logs.go:282] 0 containers: []
	W1213 18:45:30.987149   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:30.987156   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:30.987167   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:30.999233   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:30.999253   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:31.070686   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:31.062512   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:31.063387   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:31.064956   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:31.065476   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:31.066859   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:31.062512   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:31.063387   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:31.064956   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:31.065476   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:31.066859   15559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:31.070697   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:31.070708   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:31.149373   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:31.149393   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:31.182467   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:31.182484   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:33.754920   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:33.764984   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:33.765061   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:33.789610   44722 cri.go:89] found id: ""
	I1213 18:45:33.789624   44722 logs.go:282] 0 containers: []
	W1213 18:45:33.789630   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:33.789635   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:33.789694   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:33.814723   44722 cri.go:89] found id: ""
	I1213 18:45:33.814738   44722 logs.go:282] 0 containers: []
	W1213 18:45:33.814744   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:33.814749   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:33.814811   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:33.841835   44722 cri.go:89] found id: ""
	I1213 18:45:33.841848   44722 logs.go:282] 0 containers: []
	W1213 18:45:33.841855   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:33.841860   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:33.841917   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:33.875847   44722 cri.go:89] found id: ""
	I1213 18:45:33.875871   44722 logs.go:282] 0 containers: []
	W1213 18:45:33.875878   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:33.875885   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:33.875953   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:33.903037   44722 cri.go:89] found id: ""
	I1213 18:45:33.903050   44722 logs.go:282] 0 containers: []
	W1213 18:45:33.903057   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:33.903062   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:33.903135   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:33.934423   44722 cri.go:89] found id: ""
	I1213 18:45:33.934437   44722 logs.go:282] 0 containers: []
	W1213 18:45:33.934444   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:33.934449   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:33.934522   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:33.959437   44722 cri.go:89] found id: ""
	I1213 18:45:33.959450   44722 logs.go:282] 0 containers: []
	W1213 18:45:33.959458   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:33.959465   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:33.959475   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:34.024568   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:34.024587   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:34.036558   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:34.036583   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:34.113960   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:34.105595   15665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:34.106445   15665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:34.107646   15665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:34.108191   15665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:34.109855   15665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:34.105595   15665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:34.106445   15665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:34.107646   15665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:34.108191   15665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:34.109855   15665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:34.113970   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:34.113988   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:34.186879   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:34.186900   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:36.717771   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:36.731405   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:36.731462   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:36.758511   44722 cri.go:89] found id: ""
	I1213 18:45:36.758525   44722 logs.go:282] 0 containers: []
	W1213 18:45:36.758532   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:36.758537   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:36.758595   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:36.784601   44722 cri.go:89] found id: ""
	I1213 18:45:36.784614   44722 logs.go:282] 0 containers: []
	W1213 18:45:36.784621   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:36.784626   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:36.784683   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:36.813889   44722 cri.go:89] found id: ""
	I1213 18:45:36.813903   44722 logs.go:282] 0 containers: []
	W1213 18:45:36.813910   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:36.813915   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:36.813974   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:36.840673   44722 cri.go:89] found id: ""
	I1213 18:45:36.840687   44722 logs.go:282] 0 containers: []
	W1213 18:45:36.840695   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:36.840701   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:36.840758   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:36.866658   44722 cri.go:89] found id: ""
	I1213 18:45:36.866673   44722 logs.go:282] 0 containers: []
	W1213 18:45:36.866679   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:36.866684   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:36.866761   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:36.893289   44722 cri.go:89] found id: ""
	I1213 18:45:36.893303   44722 logs.go:282] 0 containers: []
	W1213 18:45:36.893311   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:36.893316   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:36.893377   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:36.920158   44722 cri.go:89] found id: ""
	I1213 18:45:36.920171   44722 logs.go:282] 0 containers: []
	W1213 18:45:36.920178   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:36.920186   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:36.920196   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:36.987002   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:36.987021   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:36.999105   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:36.999128   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:37.072378   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:37.063848   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:37.064510   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:37.066038   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:37.066549   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:37.067999   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:37.063848   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:37.064510   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:37.066038   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:37.066549   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:37.067999   15769 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:37.072390   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:37.072401   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:37.145027   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:37.145047   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:39.682857   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:39.693055   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:39.693114   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:39.717750   44722 cri.go:89] found id: ""
	I1213 18:45:39.717763   44722 logs.go:282] 0 containers: []
	W1213 18:45:39.717771   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:39.717776   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:39.717831   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:39.748452   44722 cri.go:89] found id: ""
	I1213 18:45:39.748466   44722 logs.go:282] 0 containers: []
	W1213 18:45:39.748473   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:39.748478   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:39.748535   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:39.775686   44722 cri.go:89] found id: ""
	I1213 18:45:39.775700   44722 logs.go:282] 0 containers: []
	W1213 18:45:39.775706   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:39.775712   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:39.775773   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:39.801049   44722 cri.go:89] found id: ""
	I1213 18:45:39.801063   44722 logs.go:282] 0 containers: []
	W1213 18:45:39.801070   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:39.801075   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:39.801132   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:39.829545   44722 cri.go:89] found id: ""
	I1213 18:45:39.829559   44722 logs.go:282] 0 containers: []
	W1213 18:45:39.829566   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:39.829571   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:39.829627   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:39.855870   44722 cri.go:89] found id: ""
	I1213 18:45:39.855883   44722 logs.go:282] 0 containers: []
	W1213 18:45:39.855890   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:39.855895   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:39.855951   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:39.880432   44722 cri.go:89] found id: ""
	I1213 18:45:39.880446   44722 logs.go:282] 0 containers: []
	W1213 18:45:39.880452   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:39.880460   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:39.880471   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:39.944602   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:39.936636   15870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:39.937539   15870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:39.939109   15870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:39.939488   15870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:39.940927   15870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:39.936636   15870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:39.937539   15870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:39.939109   15870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:39.939488   15870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:39.940927   15870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:39.944613   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:39.944623   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:40.014162   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:40.014186   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:40.052762   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:40.052780   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:40.123344   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:40.123364   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:42.639745   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:42.650139   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:42.650196   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:42.674810   44722 cri.go:89] found id: ""
	I1213 18:45:42.674824   44722 logs.go:282] 0 containers: []
	W1213 18:45:42.674831   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:42.674836   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:42.674896   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:42.705498   44722 cri.go:89] found id: ""
	I1213 18:45:42.705512   44722 logs.go:282] 0 containers: []
	W1213 18:45:42.705519   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:42.705524   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:42.705590   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:42.731558   44722 cri.go:89] found id: ""
	I1213 18:45:42.731572   44722 logs.go:282] 0 containers: []
	W1213 18:45:42.731586   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:42.731591   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:42.731650   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:42.758070   44722 cri.go:89] found id: ""
	I1213 18:45:42.758084   44722 logs.go:282] 0 containers: []
	W1213 18:45:42.758098   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:42.758103   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:42.758163   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:42.784043   44722 cri.go:89] found id: ""
	I1213 18:45:42.784057   44722 logs.go:282] 0 containers: []
	W1213 18:45:42.784065   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:42.784069   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:42.784130   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:42.810580   44722 cri.go:89] found id: ""
	I1213 18:45:42.810594   44722 logs.go:282] 0 containers: []
	W1213 18:45:42.810602   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:42.810607   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:42.810667   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:42.837217   44722 cri.go:89] found id: ""
	I1213 18:45:42.837230   44722 logs.go:282] 0 containers: []
	W1213 18:45:42.837237   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:42.837244   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:42.837255   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:42.869269   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:42.869289   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:42.937246   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:42.937265   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:42.948535   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:42.948551   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:43.014525   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:43.006257   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:43.006741   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:43.008386   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:43.008729   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:43.010279   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:43.006257   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:43.006741   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:43.008386   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:43.008729   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:43.010279   15993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:43.014550   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:43.014561   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:45.585650   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:45.596016   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:45.596081   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:45.621732   44722 cri.go:89] found id: ""
	I1213 18:45:45.621746   44722 logs.go:282] 0 containers: []
	W1213 18:45:45.621753   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:45.621758   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:45.621828   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:45.647999   44722 cri.go:89] found id: ""
	I1213 18:45:45.648013   44722 logs.go:282] 0 containers: []
	W1213 18:45:45.648020   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:45.648025   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:45.648084   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:45.672656   44722 cri.go:89] found id: ""
	I1213 18:45:45.672669   44722 logs.go:282] 0 containers: []
	W1213 18:45:45.672676   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:45.672681   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:45.672737   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:45.697633   44722 cri.go:89] found id: ""
	I1213 18:45:45.697648   44722 logs.go:282] 0 containers: []
	W1213 18:45:45.697655   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:45.697660   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:45.697725   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:45.722938   44722 cri.go:89] found id: ""
	I1213 18:45:45.722957   44722 logs.go:282] 0 containers: []
	W1213 18:45:45.722964   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:45.722969   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:45.723027   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:45.753044   44722 cri.go:89] found id: ""
	I1213 18:45:45.753057   44722 logs.go:282] 0 containers: []
	W1213 18:45:45.753064   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:45.753069   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:45.753139   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:45.777945   44722 cri.go:89] found id: ""
	I1213 18:45:45.777959   44722 logs.go:282] 0 containers: []
	W1213 18:45:45.777966   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:45.777974   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:45.777984   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:45.788618   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:45.788634   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:45.856342   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:45.847135   16083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:45.847845   16083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:45.849739   16083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:45.850385   16083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:45.851966   16083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:45.847135   16083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:45.847845   16083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:45.849739   16083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:45.850385   16083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:45.851966   16083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:45.856353   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:45.856363   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:45.925928   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:45.925948   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:45.955270   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:45.955286   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:48.526489   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:48.536804   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:48.536878   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:48.564096   44722 cri.go:89] found id: ""
	I1213 18:45:48.564110   44722 logs.go:282] 0 containers: []
	W1213 18:45:48.564116   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:48.564121   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:48.564180   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:48.589084   44722 cri.go:89] found id: ""
	I1213 18:45:48.589098   44722 logs.go:282] 0 containers: []
	W1213 18:45:48.589105   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:48.589117   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:48.589174   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:48.614957   44722 cri.go:89] found id: ""
	I1213 18:45:48.614971   44722 logs.go:282] 0 containers: []
	W1213 18:45:48.614978   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:48.614989   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:48.615045   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:48.639705   44722 cri.go:89] found id: ""
	I1213 18:45:48.639719   44722 logs.go:282] 0 containers: []
	W1213 18:45:48.639725   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:48.639730   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:48.639789   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:48.665151   44722 cri.go:89] found id: ""
	I1213 18:45:48.665165   44722 logs.go:282] 0 containers: []
	W1213 18:45:48.665171   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:48.665176   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:48.665237   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:48.691765   44722 cri.go:89] found id: ""
	I1213 18:45:48.691779   44722 logs.go:282] 0 containers: []
	W1213 18:45:48.691786   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:48.691791   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:48.691846   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:48.718076   44722 cri.go:89] found id: ""
	I1213 18:45:48.718089   44722 logs.go:282] 0 containers: []
	W1213 18:45:48.718096   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:48.718104   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:48.718115   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:48.729150   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:48.729166   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:48.795759   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:48.787631   16186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:48.788312   16186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:48.790025   16186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:48.790514   16186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:48.791993   16186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:48.787631   16186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:48.788312   16186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:48.790025   16186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:48.790514   16186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:48.791993   16186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:48.795769   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:48.795780   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:48.865101   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:48.865123   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:48.893317   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:48.893332   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:51.461504   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:51.471540   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:51.471603   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:51.496535   44722 cri.go:89] found id: ""
	I1213 18:45:51.496549   44722 logs.go:282] 0 containers: []
	W1213 18:45:51.496556   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:51.496561   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:51.496620   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:51.523516   44722 cri.go:89] found id: ""
	I1213 18:45:51.523530   44722 logs.go:282] 0 containers: []
	W1213 18:45:51.523537   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:51.523542   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:51.523601   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:51.548779   44722 cri.go:89] found id: ""
	I1213 18:45:51.548792   44722 logs.go:282] 0 containers: []
	W1213 18:45:51.548799   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:51.548804   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:51.548862   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:51.574426   44722 cri.go:89] found id: ""
	I1213 18:45:51.574439   44722 logs.go:282] 0 containers: []
	W1213 18:45:51.574446   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:51.574451   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:51.574508   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:51.601095   44722 cri.go:89] found id: ""
	I1213 18:45:51.601116   44722 logs.go:282] 0 containers: []
	W1213 18:45:51.601123   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:51.601128   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:51.601185   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:51.630300   44722 cri.go:89] found id: ""
	I1213 18:45:51.630314   44722 logs.go:282] 0 containers: []
	W1213 18:45:51.630321   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:51.630326   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:51.630388   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:51.658180   44722 cri.go:89] found id: ""
	I1213 18:45:51.658194   44722 logs.go:282] 0 containers: []
	W1213 18:45:51.658200   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:51.658208   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:51.658218   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:51.727599   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:51.727617   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:51.740526   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:51.740543   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:51.824581   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:51.815003   16296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:51.815673   16296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:51.817551   16296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:51.818376   16296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:51.820029   16296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:51.815003   16296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:51.815673   16296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:51.817551   16296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:51.818376   16296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:51.820029   16296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:51.824598   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:51.824608   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:51.895130   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:51.895149   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:54.423725   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:54.434109   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:54.434167   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:54.461075   44722 cri.go:89] found id: ""
	I1213 18:45:54.461096   44722 logs.go:282] 0 containers: []
	W1213 18:45:54.461104   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:54.461109   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:54.461169   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:54.486465   44722 cri.go:89] found id: ""
	I1213 18:45:54.486479   44722 logs.go:282] 0 containers: []
	W1213 18:45:54.486485   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:54.486490   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:54.486545   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:54.512518   44722 cri.go:89] found id: ""
	I1213 18:45:54.512532   44722 logs.go:282] 0 containers: []
	W1213 18:45:54.512539   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:54.512556   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:54.512613   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:54.539809   44722 cri.go:89] found id: ""
	I1213 18:45:54.539823   44722 logs.go:282] 0 containers: []
	W1213 18:45:54.539830   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:54.539835   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:54.539897   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:54.570146   44722 cri.go:89] found id: ""
	I1213 18:45:54.570159   44722 logs.go:282] 0 containers: []
	W1213 18:45:54.570166   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:54.570170   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:54.570224   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:54.596027   44722 cri.go:89] found id: ""
	I1213 18:45:54.596041   44722 logs.go:282] 0 containers: []
	W1213 18:45:54.596047   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:54.596052   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:54.596113   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:54.623337   44722 cri.go:89] found id: ""
	I1213 18:45:54.623351   44722 logs.go:282] 0 containers: []
	W1213 18:45:54.623358   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:54.623367   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:54.623382   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:45:54.654287   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:54.654305   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:54.720405   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:54.720426   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:54.731640   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:54.731656   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:54.800062   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:54.792084   16414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:54.792588   16414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:54.794071   16414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:54.794411   16414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:54.795882   16414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:54.792084   16414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:54.792588   16414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:54.794071   16414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:54.794411   16414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:54.795882   16414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:54.800085   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:54.800095   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:57.370530   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:45:57.381975   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:45:57.382044   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:45:57.410748   44722 cri.go:89] found id: ""
	I1213 18:45:57.410761   44722 logs.go:282] 0 containers: []
	W1213 18:45:57.410768   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:45:57.410773   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:45:57.410834   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:45:57.437110   44722 cri.go:89] found id: ""
	I1213 18:45:57.437123   44722 logs.go:282] 0 containers: []
	W1213 18:45:57.437130   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:45:57.437135   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:45:57.437196   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:45:57.463356   44722 cri.go:89] found id: ""
	I1213 18:45:57.463370   44722 logs.go:282] 0 containers: []
	W1213 18:45:57.463377   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:45:57.463381   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:45:57.463436   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:45:57.488350   44722 cri.go:89] found id: ""
	I1213 18:45:57.488364   44722 logs.go:282] 0 containers: []
	W1213 18:45:57.488381   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:45:57.488387   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:45:57.488442   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:45:57.513926   44722 cri.go:89] found id: ""
	I1213 18:45:57.513939   44722 logs.go:282] 0 containers: []
	W1213 18:45:57.513951   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:45:57.513956   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:45:57.514013   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:45:57.539641   44722 cri.go:89] found id: ""
	I1213 18:45:57.539655   44722 logs.go:282] 0 containers: []
	W1213 18:45:57.539661   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:45:57.539666   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:45:57.539722   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:45:57.565672   44722 cri.go:89] found id: ""
	I1213 18:45:57.565686   44722 logs.go:282] 0 containers: []
	W1213 18:45:57.565693   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:45:57.565700   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:45:57.565710   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:45:57.637461   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:45:57.637486   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:45:57.648402   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:45:57.648418   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:45:57.716551   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:45:57.708424   16510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:57.708971   16510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:57.710676   16510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:57.711086   16510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:57.712583   16510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:45:57.708424   16510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:57.708971   16510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:57.710676   16510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:57.711086   16510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:45:57.712583   16510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:45:57.716567   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:45:57.716579   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:45:57.785661   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:45:57.785681   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:46:00.318382   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:46:00.335223   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:46:00.335290   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:46:00.415052   44722 cri.go:89] found id: ""
	I1213 18:46:00.415068   44722 logs.go:282] 0 containers: []
	W1213 18:46:00.415075   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:46:00.415080   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:46:00.415144   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:46:00.448025   44722 cri.go:89] found id: ""
	I1213 18:46:00.448039   44722 logs.go:282] 0 containers: []
	W1213 18:46:00.448047   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:46:00.448052   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:46:00.448120   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:46:00.478830   44722 cri.go:89] found id: ""
	I1213 18:46:00.478844   44722 logs.go:282] 0 containers: []
	W1213 18:46:00.478851   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:46:00.478856   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:46:00.478915   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:46:00.510923   44722 cri.go:89] found id: ""
	I1213 18:46:00.510943   44722 logs.go:282] 0 containers: []
	W1213 18:46:00.510951   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:46:00.510956   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:46:00.511018   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:46:00.538053   44722 cri.go:89] found id: ""
	I1213 18:46:00.538068   44722 logs.go:282] 0 containers: []
	W1213 18:46:00.538075   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:46:00.538080   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:46:00.538139   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:46:00.563080   44722 cri.go:89] found id: ""
	I1213 18:46:00.563094   44722 logs.go:282] 0 containers: []
	W1213 18:46:00.563101   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:46:00.563107   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:46:00.563162   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:46:00.588696   44722 cri.go:89] found id: ""
	I1213 18:46:00.588710   44722 logs.go:282] 0 containers: []
	W1213 18:46:00.588716   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:46:00.588724   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:46:00.588734   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:46:00.655165   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:46:00.655185   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:46:00.667201   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:46:00.667217   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:46:00.732035   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:46:00.723385   16613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:00.723987   16613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:00.725839   16613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:00.726393   16613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:00.728162   16613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:46:00.723385   16613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:00.723987   16613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:00.725839   16613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:00.726393   16613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:00.728162   16613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:46:00.732045   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:46:00.732055   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:46:00.803574   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:46:00.803592   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:46:03.335736   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:46:03.347198   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:46:03.347266   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:46:03.376587   44722 cri.go:89] found id: ""
	I1213 18:46:03.376600   44722 logs.go:282] 0 containers: []
	W1213 18:46:03.376625   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:46:03.376630   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:46:03.376698   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:46:03.407284   44722 cri.go:89] found id: ""
	I1213 18:46:03.407298   44722 logs.go:282] 0 containers: []
	W1213 18:46:03.407305   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:46:03.407310   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:46:03.407379   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:46:03.432194   44722 cri.go:89] found id: ""
	I1213 18:46:03.432219   44722 logs.go:282] 0 containers: []
	W1213 18:46:03.432226   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:46:03.432231   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:46:03.432297   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:46:03.461490   44722 cri.go:89] found id: ""
	I1213 18:46:03.461504   44722 logs.go:282] 0 containers: []
	W1213 18:46:03.461520   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:46:03.461528   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:46:03.461586   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:46:03.486500   44722 cri.go:89] found id: ""
	I1213 18:46:03.486514   44722 logs.go:282] 0 containers: []
	W1213 18:46:03.486521   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:46:03.486526   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:46:03.486580   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:46:03.516064   44722 cri.go:89] found id: ""
	I1213 18:46:03.516079   44722 logs.go:282] 0 containers: []
	W1213 18:46:03.516095   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:46:03.516101   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:46:03.516173   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:46:03.543241   44722 cri.go:89] found id: ""
	I1213 18:46:03.543261   44722 logs.go:282] 0 containers: []
	W1213 18:46:03.543269   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:46:03.543277   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:46:03.543288   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:46:03.614698   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:46:03.606014   16715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:03.606848   16715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:03.608572   16715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:03.609328   16715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:03.610814   16715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:46:03.606014   16715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:03.606848   16715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:03.608572   16715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:03.609328   16715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:03.610814   16715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:46:03.614708   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:46:03.614719   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:46:03.683610   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:46:03.683629   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:46:03.714101   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:46:03.714118   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:46:03.783821   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:46:03.783841   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:46:06.296661   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:46:06.307402   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:46:06.307473   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:46:06.342139   44722 cri.go:89] found id: ""
	I1213 18:46:06.342152   44722 logs.go:282] 0 containers: []
	W1213 18:46:06.342159   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:46:06.342164   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:46:06.342223   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:46:06.376710   44722 cri.go:89] found id: ""
	I1213 18:46:06.376724   44722 logs.go:282] 0 containers: []
	W1213 18:46:06.376730   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:46:06.376735   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:46:06.376793   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:46:06.412732   44722 cri.go:89] found id: ""
	I1213 18:46:06.412746   44722 logs.go:282] 0 containers: []
	W1213 18:46:06.412753   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:46:06.412758   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:46:06.412814   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:46:06.445341   44722 cri.go:89] found id: ""
	I1213 18:46:06.445354   44722 logs.go:282] 0 containers: []
	W1213 18:46:06.445360   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:46:06.445365   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:46:06.445423   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:46:06.470587   44722 cri.go:89] found id: ""
	I1213 18:46:06.470601   44722 logs.go:282] 0 containers: []
	W1213 18:46:06.470608   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:46:06.470613   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:46:06.470667   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:46:06.495331   44722 cri.go:89] found id: ""
	I1213 18:46:06.495347   44722 logs.go:282] 0 containers: []
	W1213 18:46:06.495354   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:46:06.495360   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:46:06.495420   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:46:06.521489   44722 cri.go:89] found id: ""
	I1213 18:46:06.521503   44722 logs.go:282] 0 containers: []
	W1213 18:46:06.521510   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:46:06.521517   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:46:06.521531   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:46:06.552192   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:46:06.552209   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:46:06.618284   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:46:06.618302   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:46:06.630541   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:46:06.630558   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:46:06.702858   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:46:06.695039   16839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:06.695585   16839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:06.697148   16839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:06.697474   16839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:06.698996   16839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:46:06.695039   16839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:06.695585   16839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:06.697148   16839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:06.697474   16839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:06.698996   16839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:46:06.702868   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:46:06.702881   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:46:09.275499   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:46:09.285598   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:46:09.285657   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:46:09.313861   44722 cri.go:89] found id: ""
	I1213 18:46:09.313885   44722 logs.go:282] 0 containers: []
	W1213 18:46:09.313893   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:46:09.313898   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:46:09.313956   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:46:09.346645   44722 cri.go:89] found id: ""
	I1213 18:46:09.346661   44722 logs.go:282] 0 containers: []
	W1213 18:46:09.346671   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:46:09.346677   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:46:09.346742   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:46:09.381723   44722 cri.go:89] found id: ""
	I1213 18:46:09.381743   44722 logs.go:282] 0 containers: []
	W1213 18:46:09.381750   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:46:09.381755   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:46:09.381842   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:46:09.415093   44722 cri.go:89] found id: ""
	I1213 18:46:09.415106   44722 logs.go:282] 0 containers: []
	W1213 18:46:09.415113   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:46:09.415118   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:46:09.415178   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:46:09.440412   44722 cri.go:89] found id: ""
	I1213 18:46:09.440426   44722 logs.go:282] 0 containers: []
	W1213 18:46:09.440433   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:46:09.440438   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:46:09.440495   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:46:09.469945   44722 cri.go:89] found id: ""
	I1213 18:46:09.469959   44722 logs.go:282] 0 containers: []
	W1213 18:46:09.469965   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:46:09.469971   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:46:09.470037   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:46:09.495452   44722 cri.go:89] found id: ""
	I1213 18:46:09.495478   44722 logs.go:282] 0 containers: []
	W1213 18:46:09.495486   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:46:09.495494   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:46:09.495505   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:46:09.507701   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:46:09.507716   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:46:09.577735   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:46:09.564499   16927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:09.564927   16927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:09.571154   16927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:09.571832   16927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:09.573056   16927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:46:09.564499   16927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:09.564927   16927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:09.571154   16927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:09.571832   16927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:09.573056   16927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:46:09.577745   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:46:09.577756   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:46:09.650543   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:46:09.650564   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:46:09.680040   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:46:09.680057   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
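
Each "describe nodes" attempt in this log fails the same way: kubectl cannot reach https://localhost:8441/api and reports "connect: connection refused", because no kube-apiserver container exists. The sketch below is a minimal, hypothetical reachability check against that endpoint; the URL and timeout value come from the log, while the TLS handling is an assumption, since kubectl would normally verify against the kubeconfig's CA.

// Illustrative sketch (not minikube or kubectl source): reproduces the probe
// that fails in the "describe nodes" stderr above, where kubectl cannot reach
// https://localhost:8441/api and reports "connect: connection refused".
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver serves a self-signed cert; skip verification for
			// this reachability check only (an assumption for the sketch).
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://localhost:8441/api?timeout=32s")
	if err != nil {
		// With no kube-apiserver container running, this prints a
		// "connection refused" error, matching the memcache.go:265 lines.
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("apiserver responded with status:", resp.Status)
}

While the apiserver is down this prints a connection-refused error just like the stderr blocks in this log; once an apiserver container is listening on port 8441 it reports an HTTP status instead.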
	I1213 18:46:12.249315   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:46:12.259200   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:46:12.259257   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:46:12.284607   44722 cri.go:89] found id: ""
	I1213 18:46:12.284620   44722 logs.go:282] 0 containers: []
	W1213 18:46:12.284627   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:46:12.284632   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:46:12.284697   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:46:12.318167   44722 cri.go:89] found id: ""
	I1213 18:46:12.318180   44722 logs.go:282] 0 containers: []
	W1213 18:46:12.318187   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:46:12.318191   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:46:12.318249   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:46:12.361187   44722 cri.go:89] found id: ""
	I1213 18:46:12.361201   44722 logs.go:282] 0 containers: []
	W1213 18:46:12.361208   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:46:12.361213   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:46:12.361270   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:46:12.396970   44722 cri.go:89] found id: ""
	I1213 18:46:12.396983   44722 logs.go:282] 0 containers: []
	W1213 18:46:12.396990   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:46:12.396995   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:46:12.397098   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:46:12.423202   44722 cri.go:89] found id: ""
	I1213 18:46:12.423215   44722 logs.go:282] 0 containers: []
	W1213 18:46:12.423222   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:46:12.423227   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:46:12.423286   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:46:12.448231   44722 cri.go:89] found id: ""
	I1213 18:46:12.448245   44722 logs.go:282] 0 containers: []
	W1213 18:46:12.448252   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:46:12.448257   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:46:12.448314   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:46:12.477927   44722 cri.go:89] found id: ""
	I1213 18:46:12.477941   44722 logs.go:282] 0 containers: []
	W1213 18:46:12.477949   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:46:12.477956   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:46:12.477966   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:46:12.547816   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:46:12.547834   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:46:12.559262   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:46:12.559280   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:46:12.622773   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:46:12.614428   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:12.615068   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:12.616576   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:12.617216   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:12.618857   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:46:12.614428   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:12.615068   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:12.616576   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:12.617216   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:12.618857   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:46:12.622783   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:46:12.622793   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:46:12.692295   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:46:12.692312   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:46:15.224550   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:46:15.235025   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:46:15.235085   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:46:15.261669   44722 cri.go:89] found id: ""
	I1213 18:46:15.261683   44722 logs.go:282] 0 containers: []
	W1213 18:46:15.261690   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:46:15.261695   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:46:15.261755   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:46:15.290899   44722 cri.go:89] found id: ""
	I1213 18:46:15.290913   44722 logs.go:282] 0 containers: []
	W1213 18:46:15.290920   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:46:15.290925   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:46:15.290979   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:46:15.317538   44722 cri.go:89] found id: ""
	I1213 18:46:15.317551   44722 logs.go:282] 0 containers: []
	W1213 18:46:15.317558   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:46:15.317563   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:46:15.317621   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:46:15.359563   44722 cri.go:89] found id: ""
	I1213 18:46:15.359577   44722 logs.go:282] 0 containers: []
	W1213 18:46:15.359584   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:46:15.359589   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:46:15.359645   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:46:15.395203   44722 cri.go:89] found id: ""
	I1213 18:46:15.395216   44722 logs.go:282] 0 containers: []
	W1213 18:46:15.395223   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:46:15.395228   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:46:15.395288   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:46:15.428291   44722 cri.go:89] found id: ""
	I1213 18:46:15.428304   44722 logs.go:282] 0 containers: []
	W1213 18:46:15.428311   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:46:15.428316   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:46:15.428372   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:46:15.453931   44722 cri.go:89] found id: ""
	I1213 18:46:15.453945   44722 logs.go:282] 0 containers: []
	W1213 18:46:15.453951   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:46:15.453958   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:46:15.453969   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:46:15.521521   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:46:15.512931   17130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:15.513463   17130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:15.515174   17130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:15.515484   17130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:15.517840   17130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:46:15.512931   17130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:15.513463   17130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:15.515174   17130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:15.515484   17130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:15.517840   17130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:46:15.521531   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:46:15.521541   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:46:15.591139   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:46:15.591160   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:46:15.622465   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:46:15.622481   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:46:15.691330   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:46:15.691348   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:46:18.203416   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:46:18.213952   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:46:18.214025   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:46:18.239778   44722 cri.go:89] found id: ""
	I1213 18:46:18.239792   44722 logs.go:282] 0 containers: []
	W1213 18:46:18.239808   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:46:18.239814   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:46:18.239879   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:46:18.264101   44722 cri.go:89] found id: ""
	I1213 18:46:18.264114   44722 logs.go:282] 0 containers: []
	W1213 18:46:18.264121   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:46:18.264126   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:46:18.264185   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:46:18.289302   44722 cri.go:89] found id: ""
	I1213 18:46:18.289316   44722 logs.go:282] 0 containers: []
	W1213 18:46:18.289323   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:46:18.289328   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:46:18.289386   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:46:18.316088   44722 cri.go:89] found id: ""
	I1213 18:46:18.316101   44722 logs.go:282] 0 containers: []
	W1213 18:46:18.316108   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:46:18.316116   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:46:18.316174   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:46:18.351768   44722 cri.go:89] found id: ""
	I1213 18:46:18.351781   44722 logs.go:282] 0 containers: []
	W1213 18:46:18.351788   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:46:18.351792   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:46:18.351846   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:46:18.382427   44722 cri.go:89] found id: ""
	I1213 18:46:18.382441   44722 logs.go:282] 0 containers: []
	W1213 18:46:18.382447   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:46:18.382452   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:46:18.382509   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:46:18.410191   44722 cri.go:89] found id: ""
	I1213 18:46:18.410205   44722 logs.go:282] 0 containers: []
	W1213 18:46:18.410212   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:46:18.410220   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:46:18.410230   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:46:18.473809   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:46:18.464747   17232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:18.465711   17232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:18.467472   17232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:18.467819   17232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:18.469591   17232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:46:18.464747   17232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:18.465711   17232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:18.467472   17232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:18.467819   17232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:18.469591   17232 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:46:18.473819   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:46:18.473837   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:46:18.545360   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:46:18.545378   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:46:18.573170   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:46:18.573186   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:46:18.638179   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:46:18.638198   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:46:21.149461   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:46:21.159925   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:46:21.159987   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:46:21.185083   44722 cri.go:89] found id: ""
	I1213 18:46:21.185097   44722 logs.go:282] 0 containers: []
	W1213 18:46:21.185104   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:46:21.185109   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:46:21.185169   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:46:21.210110   44722 cri.go:89] found id: ""
	I1213 18:46:21.210124   44722 logs.go:282] 0 containers: []
	W1213 18:46:21.210131   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:46:21.210136   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:46:21.210199   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:46:21.235437   44722 cri.go:89] found id: ""
	I1213 18:46:21.235450   44722 logs.go:282] 0 containers: []
	W1213 18:46:21.235457   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:46:21.235462   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:46:21.235518   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:46:21.264027   44722 cri.go:89] found id: ""
	I1213 18:46:21.264041   44722 logs.go:282] 0 containers: []
	W1213 18:46:21.264061   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:46:21.264067   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:46:21.264134   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:46:21.291534   44722 cri.go:89] found id: ""
	I1213 18:46:21.291548   44722 logs.go:282] 0 containers: []
	W1213 18:46:21.291567   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:46:21.291571   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:46:21.291638   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:46:21.321987   44722 cri.go:89] found id: ""
	I1213 18:46:21.322010   44722 logs.go:282] 0 containers: []
	W1213 18:46:21.322018   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:46:21.322023   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:46:21.322088   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:46:21.354190   44722 cri.go:89] found id: ""
	I1213 18:46:21.354218   44722 logs.go:282] 0 containers: []
	W1213 18:46:21.354225   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:46:21.354232   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:46:21.354242   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:46:21.432072   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:46:21.432092   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:46:21.443924   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:46:21.443941   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:46:21.512256   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:46:21.503676   17342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:21.504240   17342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:21.506119   17342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:21.506493   17342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:21.508024   17342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:46:21.503676   17342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:21.504240   17342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:21.506119   17342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:21.506493   17342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:46:21.508024   17342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:46:21.512269   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:46:21.512281   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 18:46:21.584867   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:46:21.584887   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:46:24.118323   44722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 18:46:24.129552   44722 kubeadm.go:602] duration metric: took 4m2.563511626s to restartPrimaryControlPlane
	W1213 18:46:24.129614   44722 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1213 18:46:24.129691   44722 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 18:46:24.541036   44722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 18:46:24.553708   44722 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 18:46:24.561742   44722 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 18:46:24.561810   44722 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 18:46:24.569735   44722 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 18:46:24.569745   44722 kubeadm.go:158] found existing configuration files:
	
	I1213 18:46:24.569794   44722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 18:46:24.577570   44722 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 18:46:24.577624   44722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 18:46:24.584990   44722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 18:46:24.592683   44722 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 18:46:24.592744   44722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 18:46:24.600210   44722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 18:46:24.607772   44722 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 18:46:24.607829   44722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 18:46:24.615311   44722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 18:46:24.623206   44722 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 18:46:24.623270   44722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
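The four checks above follow a grep-then-remove pattern per kubeconfig file. A minimal sketch of the same cleanup, assuming only the endpoint and file list shown in the log, would be:

	# Remove any kubeconfig that does not reference the expected control-plane endpoint.
	endpoint="https://control-plane.minikube.internal:8441"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f"; then
	    sudo rm -f "/etc/kubernetes/$f"
	  fi
	done

In this run each grep exits with status 2 because the files are missing entirely, so all four removals are no-ops before kubeadm init is attempted.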
	I1213 18:46:24.631351   44722 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 18:46:24.746076   44722 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 18:46:24.746546   44722 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 18:46:24.812383   44722 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 18:50:26.971755   44722 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 18:50:26.971788   44722 kubeadm.go:319] 
	I1213 18:50:26.971891   44722 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 18:50:26.975722   44722 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 18:50:26.975775   44722 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 18:50:26.975864   44722 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 18:50:26.975918   44722 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 18:50:26.975952   44722 kubeadm.go:319] OS: Linux
	I1213 18:50:26.975995   44722 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 18:50:26.976042   44722 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 18:50:26.976088   44722 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 18:50:26.976134   44722 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 18:50:26.976181   44722 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 18:50:26.976228   44722 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 18:50:26.976271   44722 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 18:50:26.976318   44722 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 18:50:26.976374   44722 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 18:50:26.976446   44722 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 18:50:26.976550   44722 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 18:50:26.976642   44722 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 18:50:26.976705   44722 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 18:50:26.979839   44722 out.go:252]   - Generating certificates and keys ...
	I1213 18:50:26.979929   44722 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 18:50:26.979994   44722 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 18:50:26.980071   44722 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 18:50:26.980130   44722 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 18:50:26.980204   44722 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 18:50:26.980256   44722 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 18:50:26.980323   44722 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 18:50:26.980389   44722 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 18:50:26.980463   44722 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 18:50:26.980534   44722 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 18:50:26.980570   44722 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 18:50:26.980625   44722 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 18:50:26.980698   44722 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 18:50:26.980766   44722 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 18:50:26.980827   44722 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 18:50:26.980893   44722 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 18:50:26.980947   44722 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 18:50:26.981062   44722 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 18:50:26.981134   44722 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 18:50:26.984046   44722 out.go:252]   - Booting up control plane ...
	I1213 18:50:26.984213   44722 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 18:50:26.984302   44722 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 18:50:26.984406   44722 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 18:50:26.984526   44722 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 18:50:26.984621   44722 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 18:50:26.984728   44722 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 18:50:26.984811   44722 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 18:50:26.984849   44722 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 18:50:26.984978   44722 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 18:50:26.985109   44722 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 18:50:26.985193   44722 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000261471s
	I1213 18:50:26.985199   44722 kubeadm.go:319] 
	I1213 18:50:26.985265   44722 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 18:50:26.985304   44722 kubeadm.go:319] 	- The kubelet is not running
	I1213 18:50:26.985407   44722 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 18:50:26.985410   44722 kubeadm.go:319] 
	I1213 18:50:26.985524   44722 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 18:50:26.985559   44722 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 18:50:26.985594   44722 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 18:50:26.985645   44722 kubeadm.go:319] 
	W1213 18:50:26.985723   44722 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000261471s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
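The proximate failure in the init attempt above is the kubelet health endpoint at http://127.0.0.1:10248/healthz never answering within the 4m0s window. The node-side checks suggested by the kubeadm output itself, plus a direct probe of the same healthz URL it polls, would be roughly:

	# Diagnostics named in the kubeadm error text; run on the failing node.
	systemctl status kubelet
	sudo journalctl -xeu kubelet
	curl -sSL http://127.0.0.1:10248/healthz

Both init attempts recorded in this test fail at this same wait-control-plane step.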
	
	I1213 18:50:26.989121   44722 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 18:50:27.401657   44722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 18:50:27.414174   44722 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 18:50:27.414227   44722 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 18:50:27.422069   44722 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 18:50:27.422079   44722 kubeadm.go:158] found existing configuration files:
	
	I1213 18:50:27.422131   44722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 18:50:27.429688   44722 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 18:50:27.429740   44722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 18:50:27.436848   44722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 18:50:27.444475   44722 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 18:50:27.444539   44722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 18:50:27.451626   44722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 18:50:27.458858   44722 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 18:50:27.458912   44722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 18:50:27.466216   44722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 18:50:27.473793   44722 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 18:50:27.473846   44722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 18:50:27.481268   44722 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 18:50:27.532748   44722 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 18:50:27.532805   44722 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 18:50:27.602576   44722 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 18:50:27.602639   44722 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 18:50:27.602674   44722 kubeadm.go:319] OS: Linux
	I1213 18:50:27.602718   44722 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 18:50:27.602765   44722 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 18:50:27.602811   44722 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 18:50:27.602858   44722 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 18:50:27.602905   44722 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 18:50:27.602952   44722 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 18:50:27.602996   44722 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 18:50:27.603043   44722 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 18:50:27.603088   44722 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 18:50:27.670270   44722 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 18:50:27.670407   44722 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 18:50:27.670497   44722 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 18:50:27.681577   44722 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 18:50:27.686860   44722 out.go:252]   - Generating certificates and keys ...
	I1213 18:50:27.686961   44722 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 18:50:27.687031   44722 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 18:50:27.687115   44722 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 18:50:27.687184   44722 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 18:50:27.687264   44722 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 18:50:27.687325   44722 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 18:50:27.687398   44722 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 18:50:27.687471   44722 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 18:50:27.687593   44722 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 18:50:27.687675   44722 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 18:50:27.687715   44722 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 18:50:27.687778   44722 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 18:50:28.283128   44722 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 18:50:28.400218   44722 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 18:50:28.813695   44722 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 18:50:29.036602   44722 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 18:50:29.078002   44722 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 18:50:29.078680   44722 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 18:50:29.081273   44722 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 18:50:29.084492   44722 out.go:252]   - Booting up control plane ...
	I1213 18:50:29.084588   44722 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 18:50:29.084675   44722 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 18:50:29.086298   44722 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 18:50:29.101051   44722 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 18:50:29.101487   44722 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 18:50:29.109109   44722 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 18:50:29.109586   44722 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 18:50:29.109636   44722 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 18:50:29.237458   44722 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 18:50:29.237571   44722 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 18:54:29.237512   44722 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000245862s
	I1213 18:54:29.237544   44722 kubeadm.go:319] 
	I1213 18:54:29.237597   44722 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 18:54:29.237627   44722 kubeadm.go:319] 	- The kubelet is not running
	I1213 18:54:29.237724   44722 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 18:54:29.237728   44722 kubeadm.go:319] 
	I1213 18:54:29.237836   44722 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 18:54:29.237865   44722 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 18:54:29.237893   44722 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 18:54:29.237896   44722 kubeadm.go:319] 
	I1213 18:54:29.241945   44722 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 18:54:29.242401   44722 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 18:54:29.242519   44722 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 18:54:29.242782   44722 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 18:54:29.242790   44722 kubeadm.go:319] 
	I1213 18:54:29.242854   44722 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 18:54:29.242916   44722 kubeadm.go:403] duration metric: took 12m7.716453663s to StartCluster
	I1213 18:54:29.242947   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 18:54:29.243009   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 18:54:29.267936   44722 cri.go:89] found id: ""
	I1213 18:54:29.267953   44722 logs.go:282] 0 containers: []
	W1213 18:54:29.267960   44722 logs.go:284] No container was found matching "kube-apiserver"
	I1213 18:54:29.267966   44722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 18:54:29.268023   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 18:54:29.295961   44722 cri.go:89] found id: ""
	I1213 18:54:29.295975   44722 logs.go:282] 0 containers: []
	W1213 18:54:29.295982   44722 logs.go:284] No container was found matching "etcd"
	I1213 18:54:29.295987   44722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 18:54:29.296049   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 18:54:29.321287   44722 cri.go:89] found id: ""
	I1213 18:54:29.321301   44722 logs.go:282] 0 containers: []
	W1213 18:54:29.321308   44722 logs.go:284] No container was found matching "coredns"
	I1213 18:54:29.321313   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 18:54:29.321369   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 18:54:29.346752   44722 cri.go:89] found id: ""
	I1213 18:54:29.346766   44722 logs.go:282] 0 containers: []
	W1213 18:54:29.346773   44722 logs.go:284] No container was found matching "kube-scheduler"
	I1213 18:54:29.346778   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 18:54:29.346840   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 18:54:29.373200   44722 cri.go:89] found id: ""
	I1213 18:54:29.373214   44722 logs.go:282] 0 containers: []
	W1213 18:54:29.373222   44722 logs.go:284] No container was found matching "kube-proxy"
	I1213 18:54:29.373227   44722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 18:54:29.373284   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 18:54:29.399377   44722 cri.go:89] found id: ""
	I1213 18:54:29.399390   44722 logs.go:282] 0 containers: []
	W1213 18:54:29.399397   44722 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 18:54:29.399403   44722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 18:54:29.399459   44722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 18:54:29.427837   44722 cri.go:89] found id: ""
	I1213 18:54:29.427851   44722 logs.go:282] 0 containers: []
	W1213 18:54:29.427867   44722 logs.go:284] No container was found matching "kindnet"
	I1213 18:54:29.427876   44722 logs.go:123] Gathering logs for container status ...
	I1213 18:54:29.427886   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 18:54:29.456109   44722 logs.go:123] Gathering logs for kubelet ...
	I1213 18:54:29.456125   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 18:54:29.522138   44722 logs.go:123] Gathering logs for dmesg ...
	I1213 18:54:29.522156   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 18:54:29.533671   44722 logs.go:123] Gathering logs for describe nodes ...
	I1213 18:54:29.533686   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 18:54:29.610367   44722 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:54:29.601277   21159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:54:29.601976   21159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:54:29.603577   21159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:54:29.604094   21159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:54:29.605709   21159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 18:54:29.601277   21159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:54:29.601976   21159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:54:29.603577   21159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:54:29.604094   21159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:54:29.605709   21159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 18:54:29.610381   44722 logs.go:123] Gathering logs for CRI-O ...
	I1213 18:54:29.610392   44722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1213 18:54:29.688966   44722 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000245862s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 18:54:29.689015   44722 out.go:285] * 
	W1213 18:54:29.689125   44722 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000245862s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 18:54:29.689180   44722 out.go:285] * 
	W1213 18:54:29.691288   44722 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 18:54:29.696180   44722 out.go:203] 
	W1213 18:54:29.699069   44722 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000245862s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 18:54:29.699113   44722 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 18:54:29.699131   44722 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 18:54:29.702236   44722 out.go:203] 
	
	
	==> CRI-O <==
	Dec 13 18:42:19 functional-752103 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 13 18:46:24 functional-752103 crio[9949]: time="2025-12-13T18:46:24.818362642Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=dc50dc13-71bf-495d-a717-281bc180f2f6 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:46:24 functional-752103 crio[9949]: time="2025-12-13T18:46:24.819294668Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=d8721ade-dce9-4153-a322-5ccd7819b97b name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:46:24 functional-752103 crio[9949]: time="2025-12-13T18:46:24.81975854Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=490f044a-8303-4886-ba98-7360ebf1ca73 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:46:24 functional-752103 crio[9949]: time="2025-12-13T18:46:24.820179047Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=12624e30-2525-4636-9934-824ea63a04cd name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:46:24 functional-752103 crio[9949]: time="2025-12-13T18:46:24.82056529Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=1e7d135f-0cd8-4d54-96f0-f28f4e7904d3 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:46:24 functional-752103 crio[9949]: time="2025-12-13T18:46:24.820930436Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=30771d6d-e5fc-49d6-aff6-138912d2988b name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:46:24 functional-752103 crio[9949]: time="2025-12-13T18:46:24.821514235Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=c45e1d7a-3ddb-41a5-9415-d5a2464cfd2b name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:50:27 functional-752103 crio[9949]: time="2025-12-13T18:50:27.674061922Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=54566989-a940-4ea0-9cb7-11a5ead5fdab name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:50:27 functional-752103 crio[9949]: time="2025-12-13T18:50:27.67476674Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=9907b75f-aebf-4fc7-948f-3e37eff08342 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:50:27 functional-752103 crio[9949]: time="2025-12-13T18:50:27.675335917Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=a5823f6b-c128-468c-ad19-87c38dcb3493 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:50:27 functional-752103 crio[9949]: time="2025-12-13T18:50:27.675801504Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=eb5c5b0d-734a-42c7-beea-2ae04458cd2c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:50:27 functional-752103 crio[9949]: time="2025-12-13T18:50:27.676236125Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=dc8b8dc3-cec8-44a2-afbb-932c674af235 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:50:27 functional-752103 crio[9949]: time="2025-12-13T18:50:27.676718434Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=fae4abe6-592a-492b-809b-edd01682c93f name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:50:27 functional-752103 crio[9949]: time="2025-12-13T18:50:27.677348338Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=21883f8b-9b90-4bb8-9843-c91d88abb931 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:38 functional-752103 crio[9949]: time="2025-12-13T18:54:38.738192708Z" level=info msg="Checking image status: kicbase/echo-server:functional-752103" id=42880e29-20fa-4822-ab33-09bfec92f2e2 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:38 functional-752103 crio[9949]: time="2025-12-13T18:54:38.738390305Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 13 18:54:38 functional-752103 crio[9949]: time="2025-12-13T18:54:38.738442195Z" level=info msg="Image kicbase/echo-server:functional-752103 not found" id=42880e29-20fa-4822-ab33-09bfec92f2e2 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:38 functional-752103 crio[9949]: time="2025-12-13T18:54:38.738517559Z" level=info msg="Neither image nor artfiact kicbase/echo-server:functional-752103 found" id=42880e29-20fa-4822-ab33-09bfec92f2e2 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:38 functional-752103 crio[9949]: time="2025-12-13T18:54:38.772733363Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-752103" id=0cafe629-ccfc-4817-862d-afdd77db9d4d name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:38 functional-752103 crio[9949]: time="2025-12-13T18:54:38.772935481Z" level=info msg="Image docker.io/kicbase/echo-server:functional-752103 not found" id=0cafe629-ccfc-4817-862d-afdd77db9d4d name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:38 functional-752103 crio[9949]: time="2025-12-13T18:54:38.772986583Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-752103 found" id=0cafe629-ccfc-4817-862d-afdd77db9d4d name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:38 functional-752103 crio[9949]: time="2025-12-13T18:54:38.820407985Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-752103" id=6c4fd7ba-e600-48a2-9885-e62592ca43d8 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:38 functional-752103 crio[9949]: time="2025-12-13T18:54:38.820709337Z" level=info msg="Image localhost/kicbase/echo-server:functional-752103 not found" id=6c4fd7ba-e600-48a2-9885-e62592ca43d8 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 18:54:38 functional-752103 crio[9949]: time="2025-12-13T18:54:38.82083637Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-752103 found" id=6c4fd7ba-e600-48a2-9885-e62592ca43d8 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 18:54:40.006753   21931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:54:40.007857   21931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:54:40.008782   21931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:54:40.010739   21931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 18:54:40.011493   21931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 17:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014739] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.517365] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033368] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.774100] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.795951] kauditd_printk_skb: 39 callbacks suppressed
	[Dec13 18:17] overlayfs: idmapped layers are currently not supported
	[  +0.067652] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec13 18:23] overlayfs: idmapped layers are currently not supported
	[Dec13 18:24] overlayfs: idmapped layers are currently not supported
	[Dec13 18:42] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 18:54:40 up  1:37,  0 user,  load average: 0.44, 0.26, 0.32
	Linux functional-752103 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 18:54:37 functional-752103 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:54:37 functional-752103 kubelet[21729]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:54:37 functional-752103 kubelet[21729]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:54:37 functional-752103 kubelet[21729]: E1213 18:54:37.891975   21729 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 18:54:37 functional-752103 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 18:54:37 functional-752103 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 18:54:38 functional-752103 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 972.
	Dec 13 18:54:38 functional-752103 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:54:38 functional-752103 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:54:38 functional-752103 kubelet[21767]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:54:38 functional-752103 kubelet[21767]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:54:38 functional-752103 kubelet[21767]: E1213 18:54:38.655363   21767 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 18:54:38 functional-752103 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 18:54:38 functional-752103 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 18:54:39 functional-752103 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 973.
	Dec 13 18:54:39 functional-752103 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:54:39 functional-752103 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:54:39 functional-752103 kubelet[21836]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:54:39 functional-752103 kubelet[21836]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 18:54:39 functional-752103 kubelet[21836]: E1213 18:54:39.426276   21836 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 18:54:39 functional-752103 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 18:54:39 functional-752103 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 18:54:40 functional-752103 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 974.
	Dec 13 18:54:40 functional-752103 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 18:54:40 functional-752103 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-752103 -n functional-752103
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-752103 -n functional-752103: exit status 2 (461.472191ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-752103" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (3.02s)
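The kubelet journal above shows the actual blocker behind this group of failures: kubelet v1.35.0-beta.0 refuses to start on a host that still exposes cgroup v1, so kubeadm never sees a healthy kubelet and nothing ever listens on port 8441. A minimal triage sketch, using only commands that appear in the output above (the --extra-config value is minikube's own suggestion for K8S_KUBELET_NOT_RUNNING, not a fix verified in this run; the kubeadm preflight warning instead points at the kubelet config option 'FailCgroupV1'):

	# "cgroup2fs" means cgroup v2; "tmpfs" means the v1 hierarchy that kubelet v1.35 rejects.
	minikube -p functional-752103 ssh -- stat -fc %T /sys/fs/cgroup/

	# Inspect the crash loop (the restart counter was at 974 when these logs were gathered).
	minikube -p functional-752103 ssh "sudo journalctl -xeu kubelet | tail -n 50"

	# Suggestion printed by minikube itself in the output above.
	minikube start -p functional-752103 --extra-config=kubelet.cgroup-driver=systemd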

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.6s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-752103 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-752103 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 103. stderr: I1213 18:54:45.488999   59748 out.go:360] Setting OutFile to fd 1 ...
I1213 18:54:45.489123   59748 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 18:54:45.489128   59748 out.go:374] Setting ErrFile to fd 2...
I1213 18:54:45.489133   59748 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 18:54:45.489379   59748 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-2686/.minikube/bin
I1213 18:54:45.489613   59748 mustload.go:66] Loading cluster: functional-752103
I1213 18:54:45.489994   59748 config.go:182] Loaded profile config "functional-752103": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 18:54:45.490425   59748 cli_runner.go:164] Run: docker container inspect functional-752103 --format={{.State.Status}}
I1213 18:54:45.531272   59748 host.go:66] Checking if "functional-752103" exists ...
I1213 18:54:45.531606   59748 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1213 18:54:45.698011   59748 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 18:54:45.67995227 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1213 18:54:45.698125   59748 api_server.go:166] Checking apiserver status ...
I1213 18:54:45.698191   59748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1213 18:54:45.698240   59748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
I1213 18:54:45.724866   59748 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa Username:docker}
W1213 18:54:45.851073   59748 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

                                                
                                                
stderr:
I1213 18:54:45.854420   59748 out.go:179] * The control-plane node functional-752103 apiserver is not running: (state=Stopped)
I1213 18:54:45.857630   59748 out.go:179]   To start a cluster, run: "minikube start -p functional-752103"

                                                
                                                
stdout: * The control-plane node functional-752103 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-752103"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-752103 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-752103 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-752103 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-752103 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 59747: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-752103 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-752103 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.60s)
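Exit status 103 is the path minikube takes when the control plane is not running: before starting the tunnel it inspects the node container and then looks for an apiserver process inside it, and both checks are visible in the trace above. A hedged sketch of repeating those two checks by hand (profile name and process pattern taken from the log; in this run the node container itself is up and it is the process check that comes back empty):

	# Container state as minikube sees it.
	docker container inspect functional-752103 --format={{.State.Status}}

	# The same process check the tunnel performs; no output means no kube-apiserver.
	minikube -p functional-752103 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'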

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (0.09s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-752103 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Non-zero exit: kubectl --context functional-752103 apply -f testdata/testsvc.yaml: exit status 1 (86.07834ms)

                                                
                                                
** stderr ** 
	error: error validating "testdata/testsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:214: kubectl --context functional-752103 apply -f testdata/testsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (0.09s)
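The validation error is a secondary symptom: kubectl cannot download the OpenAPI schema because nothing answers on 192.168.49.2:8441, so the apply fails during client-side validation before any request is sent. The flag the error text suggests only moves the failure to the actual API call, as in this sketch (same context and manifest as the test):

	# Skipping validation still fails while the apiserver is down; it just
	# fails on the request to :8441 instead of on the schema download.
	kubectl --context functional-752103 apply --validate=false -f testdata/testsvc.yaml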

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (109.76s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://10.100.89.176": Temporary Error: Get "http://10.100.89.176": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-752103 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-752103 get svc nginx-svc: exit status 1 (60.839288ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-752103 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (109.76s)
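The direct-access probe times out for the same underlying reason: the nginx-svc object cannot even be read from the refused apiserver, so there is nothing for the tunnel to route to. A short sketch of the two probes this test effectively performs, with the service name and ClusterIP taken from the log:

	# Service lookup the test retries; refused while port 8441 is down.
	kubectl --context functional-752103 get svc nginx-svc -o wide

	# Direct HTTP probe of the tunneled ClusterIP with a bounded timeout.
	curl --max-time 10 http://10.100.89.176/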

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.05s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-752103 create deployment hello-node --image kicbase/echo-server
functional_test.go:1451: (dbg) Non-zero exit: kubectl --context functional-752103 create deployment hello-node --image kicbase/echo-server: exit status 1 (52.382966ms)

                                                
                                                
** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

                                                
                                                
** /stderr **
functional_test.go:1453: failed to create hello-node deployment with this command "kubectl --context functional-752103 create deployment hello-node --image kicbase/echo-server": exit status 1.
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.05s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.31s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 service list
functional_test.go:1469: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-752103 service list: exit status 103 (305.1881ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-752103 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-752103"

                                                
                                                
-- /stdout --
functional_test.go:1471: failed to do service list. args "out/minikube-linux-arm64 -p functional-752103 service list" : exit status 103
functional_test.go:1474: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-752103 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-752103\"\n"-
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.31s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.27s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 service list -o json
functional_test.go:1499: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-752103 service list -o json: exit status 103 (271.792663ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-752103 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-752103"

                                                
                                                
-- /stdout --
functional_test.go:1501: failed to list services with json format. args "out/minikube-linux-arm64 -p functional-752103 service list -o json": exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.27s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.25s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-752103 service --namespace=default --https --url hello-node: exit status 103 (251.882272ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-752103 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-752103"

                                                
                                                
-- /stdout --
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-752103 service --namespace=default --https --url hello-node" : exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.25s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.28s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-752103 service hello-node --url --format={{.IP}}: exit status 103 (284.376943ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-752103 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-752103"

                                                
                                                
-- /stdout --
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-752103 service hello-node --url --format={{.IP}}": exit status 103
functional_test.go:1558: "* The control-plane node functional-752103 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-752103\"" is not a valid IP
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.28s)
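The --format flag takes a Go template over the service's URL fields, and the test treats whatever the command prints as the template result, which is why the "apiserver is not running" advisory is rejected as "not a valid IP". A hedged sketch of the same invocation against a healthy profile (the expected output is inferred from the test's assertion, not observed in this run):

	# With a running cluster this prints only the node IP for the service
	# (192.168.49.2 in this job), because the template selects the .IP field.
	minikube -p functional-752103 service hello-node --url --format="{{.IP}}"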

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-752103 service hello-node --url: exit status 103 (261.853919ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-752103 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-752103"

                                                
                                                
-- /stdout --
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-752103 service hello-node --url": exit status 103
functional_test.go:1575: found endpoint for hello-node: * The control-plane node functional-752103 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-752103"
functional_test.go:1579: failed to parse "* The control-plane node functional-752103 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-752103\"": parse "* The control-plane node functional-752103 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-752103\"": net/url: invalid control character in URL
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.26s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (2.74s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-752103 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4261349897/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765652202838952248" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4261349897/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765652202838952248" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4261349897/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765652202838952248" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4261349897/001/test-1765652202838952248
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-752103 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (372.58777ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1213 18:56:43.211808    4637 retry.go:31] will retry after 689.817713ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 13 18:56 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 13 18:56 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 13 18:56 test-1765652202838952248
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 ssh cat /mount-9p/test-1765652202838952248
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-752103 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:148: (dbg) Non-zero exit: kubectl --context functional-752103 replace --force -f testdata/busybox-mount-test.yaml: exit status 1 (58.22504ms)

                                                
                                                
** stderr ** 
	error: error when deleting "testdata/busybox-mount-test.yaml": Delete "https://192.168.49.2:8441/api/v1/namespaces/default/pods/busybox-mount": dial tcp 192.168.49.2:8441: connect: connection refused

                                                
                                                
** /stderr **
functional_test_mount_test.go:150: failed to 'kubectl replace' for busybox-mount-test. args "kubectl --context functional-752103 replace --force -f testdata/busybox-mount-test.yaml" : exit status 1
functional_test_mount_test.go:80: "TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:81: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:81: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-752103 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (284.627597ms)

                                                
                                                
-- stdout --
	192.168.49.1 on /mount-9p type 9p (rw,relatime,sync,dirsync,dfltuid=1000,dfltgid=997,access=any,msize=262144,trans=tcp,noextend,port=40711)
	total 2
	-rw-r--r-- 1 docker docker 24 Dec 13 18:56 created-by-test
	-rw-r--r-- 1 docker docker 24 Dec 13 18:56 created-by-test-removed-by-pod
	-rw-r--r-- 1 docker docker 24 Dec 13 18:56 test-1765652202838952248
	cat: /mount-9p/pod-dates: No such file or directory

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:83: debugging command "out/minikube-linux-arm64 -p functional-752103 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-752103 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4261349897/001:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-752103 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4261349897/001:/mount-9p --alsologtostderr -v=1] stdout:
* Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4261349897/001 into VM as /mount-9p ...
- Mount type:   9p
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Options:      map[]
- Bind Address: 192.168.49.1:40711
* Userspace file server: 
ufs starting
* Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4261349897/001 to /mount-9p

                                                
                                                
* NOTE: This process must stay alive for the mount to be accessible ...
* Unmounting /mount-9p ...

                                                
                                                

                                                
                                                
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-752103 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4261349897/001:/mount-9p --alsologtostderr -v=1] stderr:
I1213 18:56:42.898241   61844 out.go:360] Setting OutFile to fd 1 ...
I1213 18:56:42.898450   61844 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 18:56:42.898467   61844 out.go:374] Setting ErrFile to fd 2...
I1213 18:56:42.898482   61844 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 18:56:42.898733   61844 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-2686/.minikube/bin
I1213 18:56:42.899006   61844 mustload.go:66] Loading cluster: functional-752103
I1213 18:56:42.899379   61844 config.go:182] Loaded profile config "functional-752103": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 18:56:42.899929   61844 cli_runner.go:164] Run: docker container inspect functional-752103 --format={{.State.Status}}
I1213 18:56:42.921579   61844 host.go:66] Checking if "functional-752103" exists ...
I1213 18:56:42.921884   61844 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1213 18:56:43.023572   61844 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 18:56:43.011584331 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1213 18:56:43.023779   61844 cli_runner.go:164] Run: docker network inspect functional-752103 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1213 18:56:43.048348   61844 out.go:179] * Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4261349897/001 into VM as /mount-9p ...
I1213 18:56:43.051377   61844 out.go:179]   - Mount type:   9p
I1213 18:56:43.054346   61844 out.go:179]   - User ID:      docker
I1213 18:56:43.057162   61844 out.go:179]   - Group ID:     docker
I1213 18:56:43.060144   61844 out.go:179]   - Version:      9p2000.L
I1213 18:56:43.063064   61844 out.go:179]   - Message Size: 262144
I1213 18:56:43.066045   61844 out.go:179]   - Options:      map[]
I1213 18:56:43.068955   61844 out.go:179]   - Bind Address: 192.168.49.1:40711
I1213 18:56:43.071729   61844 out.go:179] * Userspace file server: 
I1213 18:56:43.072043   61844 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1213 18:56:43.072133   61844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
I1213 18:56:43.102210   61844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa Username:docker}
I1213 18:56:43.227679   61844 mount.go:180] unmount for /mount-9p ran successfully
I1213 18:56:43.227717   61844 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /mount-9p"
I1213 18:56:43.236072   61844 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=40711,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p"
I1213 18:56:43.246429   61844 main.go:127] stdlog: ufs.go:141 connected
I1213 18:56:43.246596   61844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44620 Tversion tag 65535 msize 262144 version '9P2000.L'
I1213 18:56:43.246643   61844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44620 Rversion tag 65535 msize 262144 version '9P2000'
I1213 18:56:43.246858   61844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44620 Tattach tag 0 fid 0 afid 4294967295 uname 'nobody' nuname 0 aname ''
I1213 18:56:43.246920   61844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44620 Rattach tag 0 aqid (44319 19128553 'd')
I1213 18:56:43.250871   61844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44620 Tstat tag 0 fid 0
I1213 18:56:43.250949   61844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44620 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (44319 19128553 'd') m d775 at 0 mt 1765652202 l 4096 t 0 d 0 ext )
I1213 18:56:43.254507   61844 lock.go:50] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/.mount-process: {Name:mkf2f29f9b5ef9f5eda4965da679c17ffeadb96b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 18:56:43.254716   61844 mount.go:105] mount successful: ""
I1213 18:56:43.258209   61844 out.go:179] * Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4261349897/001 to /mount-9p
I1213 18:56:43.261170   61844 out.go:203] 
I1213 18:56:43.264048   61844 out.go:179] * NOTE: This process must stay alive for the mount to be accessible ...
I1213 18:56:44.437319   61844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44620 Tstat tag 0 fid 0
I1213 18:56:44.437406   61844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44620 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (44319 19128553 'd') m d775 at 0 mt 1765652202 l 4096 t 0 d 0 ext )
I1213 18:56:44.437755   61844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44620 Twalk tag 0 fid 0 newfid 1 
I1213 18:56:44.437794   61844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44620 Rwalk tag 0 
I1213 18:56:44.437930   61844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44620 Topen tag 0 fid 1 mode 0
I1213 18:56:44.437979   61844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44620 Ropen tag 0 qid (44319 19128553 'd') iounit 0
I1213 18:56:44.438110   61844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44620 Tstat tag 0 fid 0
I1213 18:56:44.438147   61844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44620 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (44319 19128553 'd') m d775 at 0 mt 1765652202 l 4096 t 0 d 0 ext )
I1213 18:56:44.438295   61844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44620 Tread tag 0 fid 1 offset 0 count 262120
I1213 18:56:44.438419   61844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44620 Rread tag 0 count 258
I1213 18:56:44.438556   61844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44620 Tread tag 0 fid 1 offset 258 count 261862
I1213 18:56:44.438585   61844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44620 Rread tag 0 count 0
I1213 18:56:44.438723   61844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44620 Tread tag 0 fid 1 offset 258 count 262120
I1213 18:56:44.438748   61844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44620 Rread tag 0 count 0
I1213 18:56:44.438888   61844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44620 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1213 18:56:44.438922   61844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44620 Rwalk tag 0 (4431a 19128553 '') 
I1213 18:56:44.439049   61844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44620 Tstat tag 0 fid 2
I1213 18:56:44.439083   61844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44620 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (4431a 19128553 '') m 644 at 0 mt 1765652202 l 24 t 0 d 0 ext )
I1213 18:56:44.439205   61844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44620 Tstat tag 0 fid 2
I1213 18:56:44.439234   61844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44620 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (4431a 19128553 '') m 644 at 0 mt 1765652202 l 24 t 0 d 0 ext )
I1213 18:56:44.439363   61844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44620 Tclunk tag 0 fid 2
I1213 18:56:44.439396   61844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44620 Rclunk tag 0
I1213 18:56:44.439538   61844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44620 Twalk tag 0 fid 0 newfid 2 0:'test-1765652202838952248' 
I1213 18:56:44.439585   61844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44620 Rwalk tag 0 (4431c 19128553 '') 
I1213 18:56:44.439704   61844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44620 Tstat tag 0 fid 2
I1213 18:56:44.439739   61844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44620 Rstat tag 0 st ('test-1765652202838952248' 'jenkins' 'jenkins' '' q (4431c 19128553 '') m 644 at 0 mt 1765652202 l 24 t 0 d 0 ext )
I1213 18:56:44.439871   61844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44620 Tstat tag 0 fid 2
I1213 18:56:44.439900   61844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44620 Rstat tag 0 st ('test-1765652202838952248' 'jenkins' 'jenkins' '' q (4431c 19128553 '') m 644 at 0 mt 1765652202 l 24 t 0 d 0 ext )
I1213 18:56:44.440028   61844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44620 Tclunk tag 0 fid 2
I1213 18:56:44.440055   61844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44620 Rclunk tag 0
I1213 18:56:44.440194   61844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44620 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1213 18:56:44.440236   61844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44620 Rwalk tag 0 (4431b 19128553 '') 
I1213 18:56:44.440357   61844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44620 Tstat tag 0 fid 2
I1213 18:56:44.440392   61844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44620 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (4431b 19128553 '') m 644 at 0 mt 1765652202 l 24 t 0 d 0 ext )
I1213 18:56:44.440518   61844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44620 Tstat tag 0 fid 2
I1213 18:56:44.440549   61844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44620 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (4431b 19128553 '') m 644 at 0 mt 1765652202 l 24 t 0 d 0 ext )
I1213 18:56:44.440675   61844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44620 Tclunk tag 0 fid 2
I1213 18:56:44.440697   61844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44620 Rclunk tag 0
I1213 18:56:44.440813   61844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44620 Tread tag 0 fid 1 offset 258 count 262120
I1213 18:56:44.440843   61844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44620 Rread tag 0 count 0
I1213 18:56:44.440969   61844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44620 Tclunk tag 0 fid 1
I1213 18:56:44.440999   61844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44620 Rclunk tag 0
I1213 18:56:44.733003   61844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44620 Twalk tag 0 fid 0 newfid 1 0:'test-1765652202838952248' 
I1213 18:56:44.733080   61844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44620 Rwalk tag 0 (4431c 19128553 '') 
I1213 18:56:44.733266   61844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44620 Tstat tag 0 fid 1
I1213 18:56:44.733313   61844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44620 Rstat tag 0 st ('test-1765652202838952248' 'jenkins' 'jenkins' '' q (4431c 19128553 '') m 644 at 0 mt 1765652202 l 24 t 0 d 0 ext )
I1213 18:56:44.733454   61844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44620 Twalk tag 0 fid 1 newfid 2 
I1213 18:56:44.733495   61844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44620 Rwalk tag 0 
I1213 18:56:44.733624   61844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44620 Topen tag 0 fid 2 mode 0
I1213 18:56:44.733669   61844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44620 Ropen tag 0 qid (4431c 19128553 '') iounit 0
I1213 18:56:44.733822   61844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44620 Tstat tag 0 fid 1
I1213 18:56:44.733862   61844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44620 Rstat tag 0 st ('test-1765652202838952248' 'jenkins' 'jenkins' '' q (4431c 19128553 '') m 644 at 0 mt 1765652202 l 24 t 0 d 0 ext )
I1213 18:56:44.734024   61844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44620 Tread tag 0 fid 2 offset 0 count 262120
I1213 18:56:44.734072   61844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44620 Rread tag 0 count 24
I1213 18:56:44.734219   61844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44620 Tread tag 0 fid 2 offset 24 count 262120
I1213 18:56:44.734243   61844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44620 Rread tag 0 count 0
I1213 18:56:44.734409   61844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44620 Tread tag 0 fid 2 offset 24 count 262120
I1213 18:56:44.734458   61844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44620 Rread tag 0 count 0
I1213 18:56:44.734623   61844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44620 Tclunk tag 0 fid 2
I1213 18:56:44.734657   61844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44620 Rclunk tag 0
I1213 18:56:44.734817   61844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44620 Tclunk tag 0 fid 1
I1213 18:56:44.734840   61844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44620 Rclunk tag 0
I1213 18:56:45.077664   61844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44620 Tstat tag 0 fid 0
I1213 18:56:45.077749   61844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44620 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (44319 19128553 'd') m d775 at 0 mt 1765652202 l 4096 t 0 d 0 ext )
I1213 18:56:45.078415   61844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44620 Twalk tag 0 fid 0 newfid 1 
I1213 18:56:45.078487   61844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44620 Rwalk tag 0 
I1213 18:56:45.078607   61844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44620 Topen tag 0 fid 1 mode 0
I1213 18:56:45.078666   61844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44620 Ropen tag 0 qid (44319 19128553 'd') iounit 0
I1213 18:56:45.078775   61844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44620 Tstat tag 0 fid 0
I1213 18:56:45.078809   61844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44620 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (44319 19128553 'd') m d775 at 0 mt 1765652202 l 4096 t 0 d 0 ext )
I1213 18:56:45.079094   61844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44620 Tread tag 0 fid 1 offset 0 count 262120
I1213 18:56:45.079266   61844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44620 Rread tag 0 count 258
I1213 18:56:45.079502   61844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44620 Tread tag 0 fid 1 offset 258 count 261862
I1213 18:56:45.079537   61844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44620 Rread tag 0 count 0
I1213 18:56:45.079992   61844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44620 Tread tag 0 fid 1 offset 258 count 262120
I1213 18:56:45.080054   61844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44620 Rread tag 0 count 0
I1213 18:56:45.080258   61844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44620 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1213 18:56:45.080307   61844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44620 Rwalk tag 0 (4431a 19128553 '') 
I1213 18:56:45.080691   61844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44620 Tstat tag 0 fid 2
I1213 18:56:45.080770   61844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44620 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (4431a 19128553 '') m 644 at 0 mt 1765652202 l 24 t 0 d 0 ext )
I1213 18:56:45.081061   61844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44620 Tstat tag 0 fid 2
I1213 18:56:45.081121   61844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44620 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (4431a 19128553 '') m 644 at 0 mt 1765652202 l 24 t 0 d 0 ext )
I1213 18:56:45.081360   61844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44620 Tclunk tag 0 fid 2
I1213 18:56:45.081390   61844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44620 Rclunk tag 0
I1213 18:56:45.081967   61844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44620 Twalk tag 0 fid 0 newfid 2 0:'test-1765652202838952248' 
I1213 18:56:45.082263   61844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44620 Rwalk tag 0 (4431c 19128553 '') 
I1213 18:56:45.082525   61844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44620 Tstat tag 0 fid 2
I1213 18:56:45.082579   61844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44620 Rstat tag 0 st ('test-1765652202838952248' 'jenkins' 'jenkins' '' q (4431c 19128553 '') m 644 at 0 mt 1765652202 l 24 t 0 d 0 ext )
I1213 18:56:45.082968   61844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44620 Tstat tag 0 fid 2
I1213 18:56:45.083026   61844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44620 Rstat tag 0 st ('test-1765652202838952248' 'jenkins' 'jenkins' '' q (4431c 19128553 '') m 644 at 0 mt 1765652202 l 24 t 0 d 0 ext )
I1213 18:56:45.083532   61844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44620 Tclunk tag 0 fid 2
I1213 18:56:45.083569   61844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44620 Rclunk tag 0
I1213 18:56:45.084100   61844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44620 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1213 18:56:45.084193   61844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44620 Rwalk tag 0 (4431b 19128553 '') 
I1213 18:56:45.084415   61844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44620 Tstat tag 0 fid 2
I1213 18:56:45.084488   61844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44620 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (4431b 19128553 '') m 644 at 0 mt 1765652202 l 24 t 0 d 0 ext )
I1213 18:56:45.084694   61844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44620 Tstat tag 0 fid 2
I1213 18:56:45.084759   61844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44620 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (4431b 19128553 '') m 644 at 0 mt 1765652202 l 24 t 0 d 0 ext )
I1213 18:56:45.084927   61844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44620 Tclunk tag 0 fid 2
I1213 18:56:45.084952   61844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44620 Rclunk tag 0
I1213 18:56:45.085161   61844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44620 Tread tag 0 fid 1 offset 258 count 262120
I1213 18:56:45.085219   61844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44620 Rread tag 0 count 0
I1213 18:56:45.085422   61844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44620 Tclunk tag 0 fid 1
I1213 18:56:45.085467   61844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44620 Rclunk tag 0
I1213 18:56:45.086680   61844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44620 Twalk tag 0 fid 0 newfid 1 0:'pod-dates' 
I1213 18:56:45.086776   61844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44620 Rerror tag 0 ename 'file not found' ecode 0
I1213 18:56:45.459886   61844 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44620 Tclunk tag 0 fid 0
I1213 18:56:45.459936   61844 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44620 Rclunk tag 0
I1213 18:56:45.461204   61844 main.go:127] stdlog: ufs.go:147 disconnected
I1213 18:56:45.483981   61844 out.go:179] * Unmounting /mount-9p ...
I1213 18:56:45.487003   61844 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1213 18:56:45.494547   61844 mount.go:180] unmount for /mount-9p ran successfully
I1213 18:56:45.494660   61844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/.mount-process: {Name:mkf2f29f9b5ef9f5eda4965da679c17ffeadb96b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 18:56:45.497846   61844 out.go:203] 
W1213 18:56:45.500809   61844 out.go:285] X Exiting due to MK_INTERRUPTED: Received terminated signal
X Exiting due to MK_INTERRUPTED: Received terminated signal
I1213 18:56:45.503709   61844 out.go:203] 
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (2.74s)
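Worth noting from the stderr trace above: the 9p mount itself succeeded (the files written by the test are visible in the guest); what never happened is the busybox pod writing /mount-9p/pod-dates. For reference, a sketch of how the guest-side mount command seen in the trace is assembled from the parameters minikube printed (values copied from the log; the snippet is illustrative, not minikube's own code):

package main

import "fmt"

func main() {
	// Parameters as printed by the mount process above.
	server := "192.168.49.1" // bind address on the docker network
	port := 40711            // userspace 9p file server port
	msize := 262144
	version := "9p2000.L"
	target := "/mount-9p"

	// Mirrors the ssh_runner invocation in the stderr trace: uid/gid are
	// resolved to the in-guest "docker" user before mounting.
	cmd := fmt.Sprintf(
		"sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=%d,port=%d,trans=tcp,version=%s %s %s",
		msize, port, version, server, target)
	fmt.Println(cmd)
}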

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (478.17s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1213 19:09:44.920550    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 19:09:45.767360    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 19:11:42.459357    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-350101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 19:14:28.005857    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 19:14:44.920743    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 19:14:45.766952    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-605114 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: exit status 105 (7m51.692248396s)

                                                
                                                
-- stdout --
	* [ha-605114] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22122
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22122-2686/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-2686/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "ha-605114" primary control-plane node in "ha-605114" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	* Enabled addons: 
	
	* Starting "ha-605114-m02" control-plane node in "ha-605114" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2
	* Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	  - env NO_PROXY=192.168.49.2
	* Verifying Kubernetes components...
	
	

                                                
                                                
-- /stdout --
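The command under test (ha_test.go:562) is an ordinary `minikube start` against the existing ha-605114 profile; its output above stops after "Verifying Kubernetes components..." and the process exits with status 105 after roughly 7m51s. A stripped-down sketch of that invocation with an explicit deadline (binary path, profile and flags copied from the Run line above; the 10-minute timeout is an illustrative value, not the test's own):

package main

import (
	"context"
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	// The 10-minute deadline is illustrative; the flags and profile are
	// copied from the (dbg) Run line above.
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
	defer cancel()

	cmd := exec.CommandContext(ctx, "out/minikube-linux-arm64",
		"-p", "ha-605114", "start",
		"--wait", "true", "--alsologtostderr", "-v", "5",
		"--driver=docker", "--container-runtime=crio")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr

	if err := cmd.Run(); err != nil {
		// err carries the same non-zero exit seen here (status 105);
		// ctx.Err() is non-nil only if our own deadline fired first.
		fmt.Println("start failed:", err, "ctx:", ctx.Err())
	}
}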
** stderr ** 
	I1213 19:07:47.349427   92925 out.go:360] Setting OutFile to fd 1 ...
	I1213 19:07:47.349751   92925 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 19:07:47.349782   92925 out.go:374] Setting ErrFile to fd 2...
	I1213 19:07:47.349805   92925 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 19:07:47.350088   92925 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-2686/.minikube/bin
	I1213 19:07:47.350503   92925 out.go:368] Setting JSON to false
	I1213 19:07:47.351372   92925 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":6620,"bootTime":1765646248,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 19:07:47.351472   92925 start.go:143] virtualization:  
	I1213 19:07:47.357175   92925 out.go:179] * [ha-605114] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 19:07:47.360285   92925 notify.go:221] Checking for updates...
	I1213 19:07:47.363188   92925 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 19:07:47.366066   92925 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 19:07:47.368997   92925 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-2686/kubeconfig
	I1213 19:07:47.371939   92925 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-2686/.minikube
	I1213 19:07:47.374564   92925 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 19:07:47.377424   92925 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 19:07:47.380815   92925 config.go:182] Loaded profile config "ha-605114": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 19:07:47.381472   92925 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 19:07:47.411852   92925 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 19:07:47.411970   92925 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 19:07:47.470115   92925 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-13 19:07:47.460445366 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 19:07:47.470224   92925 docker.go:319] overlay module found
	I1213 19:07:47.473192   92925 out.go:179] * Using the docker driver based on existing profile
	I1213 19:07:47.475964   92925 start.go:309] selected driver: docker
	I1213 19:07:47.475980   92925 start.go:927] validating driver "docker" against &{Name:ha-605114 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-605114 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow
:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 19:07:47.476125   92925 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 19:07:47.476235   92925 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 19:07:47.532110   92925 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-13 19:07:47.522555398 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 19:07:47.532550   92925 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 19:07:47.532582   92925 cni.go:84] Creating CNI manager for ""
	I1213 19:07:47.532636   92925 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1213 19:07:47.532689   92925 start.go:353] cluster config:
	{Name:ha-605114 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-605114 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-s
erver:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 19:07:47.537457   92925 out.go:179] * Starting "ha-605114" primary control-plane node in "ha-605114" cluster
	I1213 19:07:47.540151   92925 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 19:07:47.542975   92925 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 19:07:47.545679   92925 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 19:07:47.545731   92925 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-2686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1213 19:07:47.545743   92925 cache.go:65] Caching tarball of preloaded images
	I1213 19:07:47.545753   92925 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 19:07:47.545828   92925 preload.go:238] Found /home/jenkins/minikube-integration/22122-2686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 19:07:47.545838   92925 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1213 19:07:47.545971   92925 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/config.json ...
	I1213 19:07:47.565319   92925 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 19:07:47.565343   92925 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 19:07:47.565364   92925 cache.go:243] Successfully downloaded all kic artifacts
	I1213 19:07:47.565392   92925 start.go:360] acquireMachinesLock for ha-605114: {Name:mk8d2cbed975abcdd5664438df80622381a361a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 19:07:47.565456   92925 start.go:364] duration metric: took 41.903µs to acquireMachinesLock for "ha-605114"
	I1213 19:07:47.565477   92925 start.go:96] Skipping create...Using existing machine configuration
	I1213 19:07:47.565483   92925 fix.go:54] fixHost starting: 
	I1213 19:07:47.565741   92925 cli_runner.go:164] Run: docker container inspect ha-605114 --format={{.State.Status}}
	I1213 19:07:47.581688   92925 fix.go:112] recreateIfNeeded on ha-605114: state=Stopped err=<nil>
	W1213 19:07:47.581717   92925 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 19:07:47.584947   92925 out.go:252] * Restarting existing docker container for "ha-605114" ...
	I1213 19:07:47.585046   92925 cli_runner.go:164] Run: docker start ha-605114
	I1213 19:07:47.865372   92925 cli_runner.go:164] Run: docker container inspect ha-605114 --format={{.State.Status}}
	I1213 19:07:47.883933   92925 kic.go:430] container "ha-605114" state is running.
	I1213 19:07:47.884352   92925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-605114
	I1213 19:07:47.906511   92925 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/config.json ...
	I1213 19:07:47.906746   92925 machine.go:94] provisionDockerMachine start ...
	I1213 19:07:47.906805   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114
	I1213 19:07:47.930498   92925 main.go:143] libmachine: Using SSH client type: native
	I1213 19:07:47.930829   92925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1213 19:07:47.930842   92925 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 19:07:47.931376   92925 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46728->127.0.0.1:32833: read: connection reset by peer
	I1213 19:07:51.084950   92925 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-605114
	
	I1213 19:07:51.084978   92925 ubuntu.go:182] provisioning hostname "ha-605114"
	I1213 19:07:51.085064   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114
	I1213 19:07:51.103183   92925 main.go:143] libmachine: Using SSH client type: native
	I1213 19:07:51.103509   92925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1213 19:07:51.103523   92925 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-605114 && echo "ha-605114" | sudo tee /etc/hostname
	I1213 19:07:51.262962   92925 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-605114
	
	I1213 19:07:51.263080   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114
	I1213 19:07:51.281758   92925 main.go:143] libmachine: Using SSH client type: native
	I1213 19:07:51.282067   92925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1213 19:07:51.282093   92925 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-605114' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-605114/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-605114' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 19:07:51.433225   92925 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 19:07:51.433251   92925 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-2686/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-2686/.minikube}
	I1213 19:07:51.433276   92925 ubuntu.go:190] setting up certificates
	I1213 19:07:51.433294   92925 provision.go:84] configureAuth start
	I1213 19:07:51.433356   92925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-605114
	I1213 19:07:51.451056   92925 provision.go:143] copyHostCerts
	I1213 19:07:51.451109   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem
	I1213 19:07:51.451157   92925 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem, removing ...
	I1213 19:07:51.451169   92925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem
	I1213 19:07:51.451244   92925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem (1082 bytes)
	I1213 19:07:51.451330   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem
	I1213 19:07:51.451351   92925 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem, removing ...
	I1213 19:07:51.451359   92925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem
	I1213 19:07:51.451387   92925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem (1123 bytes)
	I1213 19:07:51.451438   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem
	I1213 19:07:51.451459   92925 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem, removing ...
	I1213 19:07:51.451473   92925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem
	I1213 19:07:51.451505   92925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem (1675 bytes)
	I1213 19:07:51.451557   92925 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem org=jenkins.ha-605114 san=[127.0.0.1 192.168.49.2 ha-605114 localhost minikube]
	I1213 19:07:51.562646   92925 provision.go:177] copyRemoteCerts
	I1213 19:07:51.562709   92925 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 19:07:51.562753   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114
	I1213 19:07:51.579816   92925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/ha-605114/id_rsa Username:docker}
	I1213 19:07:51.684734   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1213 19:07:51.684815   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 19:07:51.703545   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1213 19:07:51.703625   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1213 19:07:51.721319   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1213 19:07:51.721382   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 19:07:51.738806   92925 provision.go:87] duration metric: took 305.496623ms to configureAuth
	I1213 19:07:51.738832   92925 ubuntu.go:206] setting minikube options for container-runtime
	I1213 19:07:51.739059   92925 config.go:182] Loaded profile config "ha-605114": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 19:07:51.739152   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114
	I1213 19:07:51.756183   92925 main.go:143] libmachine: Using SSH client type: native
	I1213 19:07:51.756478   92925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1213 19:07:51.756493   92925 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 19:07:52.176419   92925 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 19:07:52.176439   92925 machine.go:97] duration metric: took 4.269683244s to provisionDockerMachine
	I1213 19:07:52.176449   92925 start.go:293] postStartSetup for "ha-605114" (driver="docker")
	I1213 19:07:52.176460   92925 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 19:07:52.176518   92925 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 19:07:52.176563   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114
	I1213 19:07:52.201857   92925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/ha-605114/id_rsa Username:docker}
	I1213 19:07:52.305092   92925 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 19:07:52.308224   92925 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 19:07:52.308251   92925 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 19:07:52.308263   92925 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-2686/.minikube/addons for local assets ...
	I1213 19:07:52.308316   92925 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-2686/.minikube/files for local assets ...
	I1213 19:07:52.308413   92925 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem -> 46372.pem in /etc/ssl/certs
	I1213 19:07:52.308423   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem -> /etc/ssl/certs/46372.pem
	I1213 19:07:52.308523   92925 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 19:07:52.315982   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem --> /etc/ssl/certs/46372.pem (1708 bytes)
	I1213 19:07:52.333023   92925 start.go:296] duration metric: took 156.543018ms for postStartSetup
	I1213 19:07:52.333100   92925 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 19:07:52.333150   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114
	I1213 19:07:52.353818   92925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/ha-605114/id_rsa Username:docker}
	I1213 19:07:52.454237   92925 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 19:07:52.459167   92925 fix.go:56] duration metric: took 4.893676995s for fixHost
	I1213 19:07:52.459203   92925 start.go:83] releasing machines lock for "ha-605114", held for 4.893726932s
	I1213 19:07:52.459271   92925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-605114
	I1213 19:07:52.475811   92925 ssh_runner.go:195] Run: cat /version.json
	I1213 19:07:52.475832   92925 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 19:07:52.475868   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114
	I1213 19:07:52.475886   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114
	I1213 19:07:52.494277   92925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/ha-605114/id_rsa Username:docker}
	I1213 19:07:52.499565   92925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/ha-605114/id_rsa Username:docker}
	I1213 19:07:52.694122   92925 ssh_runner.go:195] Run: systemctl --version
	I1213 19:07:52.700676   92925 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 19:07:52.737939   92925 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 19:07:52.742564   92925 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 19:07:52.742632   92925 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 19:07:52.750413   92925 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
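The find/mv step above renames any bridge or podman CNI configs to *.mk_disabled so they cannot conflict with the CNI minikube installs; in this run nothing matched. A quick, hedged way to see what (if anything) was disabled:

    ls /etc/cni/net.d/*.mk_disabled 2>/dev/null || echo "no bridge/podman CNI configs were disabled"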
	I1213 19:07:52.750438   92925 start.go:496] detecting cgroup driver to use...
	I1213 19:07:52.750469   92925 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 19:07:52.750516   92925 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 19:07:52.765290   92925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 19:07:52.779600   92925 docker.go:218] disabling cri-docker service (if available) ...
	I1213 19:07:52.779718   92925 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 19:07:52.795802   92925 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 19:07:52.809441   92925 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 19:07:52.921383   92925 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 19:07:53.050247   92925 docker.go:234] disabling docker service ...
	I1213 19:07:53.050357   92925 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 19:07:53.065412   92925 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 19:07:53.078985   92925 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 19:07:53.197041   92925 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 19:07:53.312016   92925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 19:07:53.324873   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 19:07:53.338465   92925 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 19:07:53.338566   92925 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:07:53.348165   92925 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 19:07:53.348244   92925 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:07:53.357334   92925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:07:53.366113   92925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:07:53.375030   92925 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 19:07:53.383092   92925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:07:53.392159   92925 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:07:53.400500   92925 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:07:53.409475   92925 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 19:07:53.416937   92925 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 19:07:53.424427   92925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 19:07:53.551020   92925 ssh_runner.go:195] Run: sudo systemctl restart crio
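The sed edits above set the pause image, switch the cgroup manager to cgroupfs, pin conmon's cgroup and open low ports via default_sysctls. A one-liner to confirm the resulting keys in 02-crio.conf (expected values taken from the commands shown, not captured from this node):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls' /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10.1"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    # default_sysctls = [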
	I1213 19:07:53.724377   92925 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 19:07:53.724453   92925 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 19:07:53.728412   92925 start.go:564] Will wait 60s for crictl version
	I1213 19:07:53.728528   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:07:53.732393   92925 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 19:07:53.759934   92925 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 19:07:53.760022   92925 ssh_runner.go:195] Run: crio --version
	I1213 19:07:53.792422   92925 ssh_runner.go:195] Run: crio --version
	I1213 19:07:53.826233   92925 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1213 19:07:53.829188   92925 cli_runner.go:164] Run: docker network inspect ha-605114 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 19:07:53.845641   92925 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 19:07:53.849708   92925 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 19:07:53.860398   92925 kubeadm.go:884] updating cluster {Name:ha-605114 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-605114 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 19:07:53.860545   92925 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 19:07:53.860602   92925 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 19:07:53.896899   92925 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 19:07:53.896925   92925 crio.go:433] Images already preloaded, skipping extraction
	I1213 19:07:53.896980   92925 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 19:07:53.927660   92925 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 19:07:53.927686   92925 cache_images.go:86] Images are preloaded, skipping loading
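The preload check above parses the JSON emitted by crictl; an equivalent manual listing of the preloaded image tags, assuming jq is available where you run it, would be:

    sudo crictl images --output json | jq -r '.images[].repoTags[]' | sort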
	I1213 19:07:53.927694   92925 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1213 19:07:53.927835   92925 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-605114 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-605114 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 19:07:53.927943   92925 ssh_runner.go:195] Run: crio config
	I1213 19:07:53.983293   92925 cni.go:84] Creating CNI manager for ""
	I1213 19:07:53.983320   92925 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1213 19:07:53.983344   92925 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
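Because three nodes were found, minikube picks kindnet as the CNI. Once the cluster is back up, a hedged way to confirm the CNI rollout (in minikube deployments the DaemonSet and its app label are both named kindnet, though this can differ across versions) is:

    kubectl -n kube-system get daemonset kindnet
    kubectl -n kube-system get pods -l app=kindnet -o wide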
	I1213 19:07:53.983367   92925 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-605114 NodeName:ha-605114 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 19:07:53.983512   92925 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-605114"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 19:07:53.983533   92925 kube-vip.go:115] generating kube-vip config ...
	I1213 19:07:53.983586   92925 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1213 19:07:53.998146   92925 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1213 19:07:53.998359   92925 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
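The generated static pod above runs kube-vip in ARP mode only: the earlier lsmod check found no ip_vs modules, so control-plane load-balancing was skipped and kube-vip just advertises the VIP 192.168.49.254 on eth0 of the current leader. A hedged way to verify the VIP from inside a control-plane node once it is up:

    ip addr show dev eth0 | grep 192.168.49.254
    curl -sk https://192.168.49.254:8443/healthz   # typically prints "ok" once the apiserver behind the VIP is healthy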
	I1213 19:07:53.998456   92925 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 19:07:54.007466   92925 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 19:07:54.007601   92925 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1213 19:07:54.016257   92925 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1213 19:07:54.030166   92925 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 19:07:54.043943   92925 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
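The 2206-byte file copied here is the kubeadm config rendered earlier (kubeadm.go:196). Assuming a kubeadm build that ships the "config validate" subcommand, such a file can be sanity-checked by hand with:

    sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new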
	I1213 19:07:54.057568   92925 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1213 19:07:54.070913   92925 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1213 19:07:54.074912   92925 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 19:07:54.085321   92925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 19:07:54.204815   92925 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 19:07:54.219656   92925 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114 for IP: 192.168.49.2
	I1213 19:07:54.219678   92925 certs.go:195] generating shared ca certs ...
	I1213 19:07:54.219703   92925 certs.go:227] acquiring lock for ca certs: {Name:mkf9b87b9a1a82bdfcae8a54365751154f6d858f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:07:54.219837   92925 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key
	I1213 19:07:54.219890   92925 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key
	I1213 19:07:54.219904   92925 certs.go:257] generating profile certs ...
	I1213 19:07:54.219983   92925 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/client.key
	I1213 19:07:54.220016   92925 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.key.6ef1fccc
	I1213 19:07:54.220035   92925 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.crt.6ef1fccc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I1213 19:07:54.524208   92925 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.crt.6ef1fccc ...
	I1213 19:07:54.524279   92925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.crt.6ef1fccc: {Name:mk2a78acb3455aba2154553b94cc00acb06ef2bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:07:54.524506   92925 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.key.6ef1fccc ...
	I1213 19:07:54.524551   92925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.key.6ef1fccc: {Name:mk04e3ed8a0db9ab16dbffd5c3b9073d491094e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:07:54.524690   92925 certs.go:382] copying /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.crt.6ef1fccc -> /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.crt
	I1213 19:07:54.524872   92925 certs.go:386] copying /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.key.6ef1fccc -> /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.key
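The apiserver profile cert generated above is signed for the service IP, localhost, both control-plane node IPs and the HA VIP. With OpenSSL 1.1.1 or newer, the SAN list of the resulting cert can be inspected directly (paths as used in this run):

    openssl x509 -noout -ext subjectAltName -in /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.crt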
	I1213 19:07:54.525075   92925 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/proxy-client.key
	I1213 19:07:54.525118   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1213 19:07:54.525152   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1213 19:07:54.525194   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1213 19:07:54.525228   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1213 19:07:54.525260   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1213 19:07:54.525307   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1213 19:07:54.525343   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1213 19:07:54.525371   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1213 19:07:54.525461   92925 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637.pem (1338 bytes)
	W1213 19:07:54.525519   92925 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637_empty.pem, impossibly tiny 0 bytes
	I1213 19:07:54.525567   92925 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 19:07:54.525619   92925 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem (1082 bytes)
	I1213 19:07:54.525684   92925 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem (1123 bytes)
	I1213 19:07:54.525769   92925 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem (1675 bytes)
	I1213 19:07:54.525903   92925 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem (1708 bytes)
	I1213 19:07:54.525966   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:07:54.526009   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637.pem -> /usr/share/ca-certificates/4637.pem
	I1213 19:07:54.526041   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem -> /usr/share/ca-certificates/46372.pem
	I1213 19:07:54.526676   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 19:07:54.547219   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 19:07:54.566530   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 19:07:54.584290   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 19:07:54.601920   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1213 19:07:54.619619   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 19:07:54.637359   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 19:07:54.654838   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 19:07:54.674423   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 19:07:54.692475   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637.pem --> /usr/share/ca-certificates/4637.pem (1338 bytes)
	I1213 19:07:54.711269   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem --> /usr/share/ca-certificates/46372.pem (1708 bytes)
	I1213 19:07:54.730584   92925 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 19:07:54.744548   92925 ssh_runner.go:195] Run: openssl version
	I1213 19:07:54.750950   92925 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/46372.pem
	I1213 19:07:54.759097   92925 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/46372.pem /etc/ssl/certs/46372.pem
	I1213 19:07:54.766678   92925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/46372.pem
	I1213 19:07:54.770469   92925 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 18:27 /usr/share/ca-certificates/46372.pem
	I1213 19:07:54.770573   92925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/46372.pem
	I1213 19:07:54.811925   92925 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 19:07:54.820248   92925 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:07:54.829596   92925 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 19:07:54.843944   92925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:07:54.848466   92925 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:07:54.848527   92925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:07:54.910394   92925 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 19:07:54.922018   92925 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4637.pem
	I1213 19:07:54.934942   92925 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4637.pem /etc/ssl/certs/4637.pem
	I1213 19:07:54.943147   92925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4637.pem
	I1213 19:07:54.953686   92925 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 18:27 /usr/share/ca-certificates/4637.pem
	I1213 19:07:54.953799   92925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4637.pem
	I1213 19:07:55.020871   92925 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
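The ln/openssl/test sequence above follows the standard OpenSSL CA layout: each PEM under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash (b5213941, 3ec20f2e and 51391683 here). A hedged shell equivalent for one CA:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
    test -L "/etc/ssl/certs/${h}.0" && echo "hash symlink in place"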
	I1213 19:07:55.034570   92925 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 19:07:55.045312   92925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 19:07:55.146347   92925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 19:07:55.197938   92925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 19:07:55.240888   92925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 19:07:55.293579   92925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 19:07:55.349397   92925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 19:07:55.405749   92925 kubeadm.go:401] StartCluster: {Name:ha-605114 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-605114 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 19:07:55.405941   92925 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 19:07:55.406039   92925 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 19:07:55.476432   92925 cri.go:89] found id: "23b44f60db0dc9ad888430163cce4adc2cef45e4fff10aded1fd37e36e5d5955"
	I1213 19:07:55.476492   92925 cri.go:89] found id: "9a81ddd488bb7e9ca9d20cc8af4e9414463f3bf2bd40edd26c2e9395f731a3ec"
	I1213 19:07:55.476519   92925 cri.go:89] found id: "ee202abc8dba3b97ac56d7c3063ce4fae0734134ba47b9d6070588c897f7baf0"
	I1213 19:07:55.476536   92925 cri.go:89] found id: "3c729bb1538bfb45bc9b5542f5524916c96b118344d2be8a42e58a0bc6d4cb0d"
	I1213 19:07:55.476570   92925 cri.go:89] found id: "2b3744a5aa7a90a9d9036f0de528d8ed7e951f80254fa43fd57f666e0a6ccc86"
	I1213 19:07:55.476591   92925 cri.go:89] found id: ""
	I1213 19:07:55.476674   92925 ssh_runner.go:195] Run: sudo runc list -f json
	W1213 19:07:55.502827   92925 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T19:07:55Z" level=error msg="open /run/runc: no such file or directory"
	I1213 19:07:55.502965   92925 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 19:07:55.514772   92925 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 19:07:55.514841   92925 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 19:07:55.514932   92925 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 19:07:55.530907   92925 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 19:07:55.531414   92925 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-605114" does not appear in /home/jenkins/minikube-integration/22122-2686/kubeconfig
	I1213 19:07:55.531569   92925 kubeconfig.go:62] /home/jenkins/minikube-integration/22122-2686/kubeconfig needs updating (will repair): [kubeconfig missing "ha-605114" cluster setting kubeconfig missing "ha-605114" context setting]
	I1213 19:07:55.531908   92925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/kubeconfig: {Name:mkd364151ba0e08b56cf2ae826abbcc274faaabf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:07:55.532529   92925 kapi.go:59] client config for ha-605114: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/client.crt", KeyFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/client.key", CAFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
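kubeconfig.go repairs the missing "ha-605114" entries by rewriting the kubeconfig with the client config shown above. minikube does this in-process, so the following is only an illustration of the kubectl-level equivalent using the same paths:

    kubectl config set-cluster ha-605114 --server=https://192.168.49.2:8443 \
      --certificate-authority=/home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt
    kubectl config set-credentials ha-605114 \
      --client-certificate=/home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/client.crt \
      --client-key=/home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/client.key
    kubectl config set-context ha-605114 --cluster=ha-605114 --user=ha-605114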
	I1213 19:07:55.533545   92925 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1213 19:07:55.533623   92925 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1213 19:07:55.533709   92925 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1213 19:07:55.533743   92925 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1213 19:07:55.533762   92925 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1213 19:07:55.533784   92925 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1213 19:07:55.534156   92925 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 19:07:55.550155   92925 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1213 19:07:55.550227   92925 kubeadm.go:602] duration metric: took 35.349185ms to restartPrimaryControlPlane
	I1213 19:07:55.550251   92925 kubeadm.go:403] duration metric: took 144.511847ms to StartCluster
	I1213 19:07:55.550281   92925 settings.go:142] acquiring lock: {Name:mkabef07beee93a0619ef6b8f854900ab9ed0899 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:07:55.550405   92925 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22122-2686/kubeconfig
	I1213 19:07:55.551146   92925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/kubeconfig: {Name:mkd364151ba0e08b56cf2ae826abbcc274faaabf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:07:55.551412   92925 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 19:07:55.551467   92925 start.go:242] waiting for startup goroutines ...
	I1213 19:07:55.551494   92925 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 19:07:55.552092   92925 config.go:182] Loaded profile config "ha-605114": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 19:07:55.557393   92925 out.go:179] * Enabled addons: 
	I1213 19:07:55.560282   92925 addons.go:530] duration metric: took 8.786078ms for enable addons: enabled=[]
	I1213 19:07:55.560370   92925 start.go:247] waiting for cluster config update ...
	I1213 19:07:55.560416   92925 start.go:256] writing updated cluster config ...
	I1213 19:07:55.563604   92925 out.go:203] 
	I1213 19:07:55.566673   92925 config.go:182] Loaded profile config "ha-605114": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 19:07:55.566871   92925 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/config.json ...
	I1213 19:07:55.570151   92925 out.go:179] * Starting "ha-605114-m02" control-plane node in "ha-605114" cluster
	I1213 19:07:55.572987   92925 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 19:07:55.575841   92925 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 19:07:55.578800   92925 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 19:07:55.578823   92925 cache.go:65] Caching tarball of preloaded images
	I1213 19:07:55.578933   92925 preload.go:238] Found /home/jenkins/minikube-integration/22122-2686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 19:07:55.578943   92925 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1213 19:07:55.579063   92925 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/config.json ...
	I1213 19:07:55.579269   92925 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 19:07:55.599207   92925 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 19:07:55.599233   92925 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 19:07:55.599247   92925 cache.go:243] Successfully downloaded all kic artifacts
	I1213 19:07:55.599269   92925 start.go:360] acquireMachinesLock for ha-605114-m02: {Name:mk43db0c2b2ac44e0e8dc9a68aa6922f0bb2fccb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 19:07:55.599325   92925 start.go:364] duration metric: took 36.989µs to acquireMachinesLock for "ha-605114-m02"
	I1213 19:07:55.599348   92925 start.go:96] Skipping create...Using existing machine configuration
	I1213 19:07:55.599358   92925 fix.go:54] fixHost starting: m02
	I1213 19:07:55.599613   92925 cli_runner.go:164] Run: docker container inspect ha-605114-m02 --format={{.State.Status}}
	I1213 19:07:55.630999   92925 fix.go:112] recreateIfNeeded on ha-605114-m02: state=Stopped err=<nil>
	W1213 19:07:55.631030   92925 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 19:07:55.634239   92925 out.go:252] * Restarting existing docker container for "ha-605114-m02" ...
	I1213 19:07:55.634323   92925 cli_runner.go:164] Run: docker start ha-605114-m02
	I1213 19:07:56.013613   92925 cli_runner.go:164] Run: docker container inspect ha-605114-m02 --format={{.State.Status}}
	I1213 19:07:56.043229   92925 kic.go:430] container "ha-605114-m02" state is running.
	I1213 19:07:56.043952   92925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-605114-m02
	I1213 19:07:56.072863   92925 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/config.json ...
	I1213 19:07:56.073198   92925 machine.go:94] provisionDockerMachine start ...
	I1213 19:07:56.073260   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114-m02
	I1213 19:07:56.107315   92925 main.go:143] libmachine: Using SSH client type: native
	I1213 19:07:56.107694   92925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I1213 19:07:56.107711   92925 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 19:07:56.108441   92925 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1213 19:07:59.320519   92925 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-605114-m02
	
	I1213 19:07:59.320540   92925 ubuntu.go:182] provisioning hostname "ha-605114-m02"
	I1213 19:07:59.320600   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114-m02
	I1213 19:07:59.354148   92925 main.go:143] libmachine: Using SSH client type: native
	I1213 19:07:59.354465   92925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I1213 19:07:59.354476   92925 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-605114-m02 && echo "ha-605114-m02" | sudo tee /etc/hostname
	I1213 19:07:59.560753   92925 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-605114-m02
	
	I1213 19:07:59.560835   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114-m02
	I1213 19:07:59.590681   92925 main.go:143] libmachine: Using SSH client type: native
	I1213 19:07:59.590982   92925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I1213 19:07:59.590997   92925 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-605114-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-605114-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-605114-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 19:07:59.777428   92925 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 19:07:59.777502   92925 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-2686/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-2686/.minikube}
	I1213 19:07:59.777532   92925 ubuntu.go:190] setting up certificates
	I1213 19:07:59.777573   92925 provision.go:84] configureAuth start
	I1213 19:07:59.777669   92925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-605114-m02
	I1213 19:07:59.806547   92925 provision.go:143] copyHostCerts
	I1213 19:07:59.806589   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem
	I1213 19:07:59.806621   92925 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem, removing ...
	I1213 19:07:59.806628   92925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem
	I1213 19:07:59.806709   92925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem (1123 bytes)
	I1213 19:07:59.806788   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem
	I1213 19:07:59.806805   92925 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem, removing ...
	I1213 19:07:59.806810   92925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem
	I1213 19:07:59.806854   92925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem (1675 bytes)
	I1213 19:07:59.806898   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem
	I1213 19:07:59.806916   92925 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem, removing ...
	I1213 19:07:59.806920   92925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem
	I1213 19:07:59.806944   92925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem (1082 bytes)
	I1213 19:07:59.806989   92925 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem org=jenkins.ha-605114-m02 san=[127.0.0.1 192.168.49.3 ha-605114-m02 localhost minikube]
	I1213 19:07:59.961185   92925 provision.go:177] copyRemoteCerts
	I1213 19:07:59.961261   92925 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 19:07:59.961306   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114-m02
	I1213 19:07:59.986810   92925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/ha-605114-m02/id_rsa Username:docker}
	I1213 19:08:00.131955   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1213 19:08:00.132032   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 19:08:00.173539   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1213 19:08:00.173623   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1213 19:08:00.207894   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1213 19:08:00.207965   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 19:08:00.244666   92925 provision.go:87] duration metric: took 467.054938ms to configureAuth
	I1213 19:08:00.244712   92925 ubuntu.go:206] setting minikube options for container-runtime
	I1213 19:08:00.245918   92925 config.go:182] Loaded profile config "ha-605114": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 19:08:00.246082   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114-m02
	I1213 19:08:00.327171   92925 main.go:143] libmachine: Using SSH client type: native
	I1213 19:08:00.327492   92925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I1213 19:08:00.327508   92925 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 19:08:01.970074   92925 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 19:08:01.970150   92925 machine.go:97] duration metric: took 5.896940025s to provisionDockerMachine
	I1213 19:08:01.970177   92925 start.go:293] postStartSetup for "ha-605114-m02" (driver="docker")
	I1213 19:08:01.970221   92925 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 19:08:01.970316   92925 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 19:08:01.970411   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114-m02
	I1213 19:08:02.009089   92925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/ha-605114-m02/id_rsa Username:docker}
	I1213 19:08:02.129494   92925 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 19:08:02.136549   92925 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 19:08:02.136573   92925 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 19:08:02.136585   92925 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-2686/.minikube/addons for local assets ...
	I1213 19:08:02.136646   92925 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-2686/.minikube/files for local assets ...
	I1213 19:08:02.136728   92925 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem -> 46372.pem in /etc/ssl/certs
	I1213 19:08:02.136734   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem -> /etc/ssl/certs/46372.pem
	I1213 19:08:02.136842   92925 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 19:08:02.171248   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem --> /etc/ssl/certs/46372.pem (1708 bytes)
	I1213 19:08:02.216469   92925 start.go:296] duration metric: took 246.261152ms for postStartSetup
	I1213 19:08:02.216625   92925 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 19:08:02.216685   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114-m02
	I1213 19:08:02.262639   92925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/ha-605114-m02/id_rsa Username:docker}
	I1213 19:08:02.374718   92925 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 19:08:02.380084   92925 fix.go:56] duration metric: took 6.780718951s for fixHost
	I1213 19:08:02.380108   92925 start.go:83] releasing machines lock for "ha-605114-m02", held for 6.780770726s
	I1213 19:08:02.380176   92925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-605114-m02
	I1213 19:08:02.401071   92925 out.go:179] * Found network options:
	I1213 19:08:02.404164   92925 out.go:179]   - NO_PROXY=192.168.49.2
	W1213 19:08:02.407079   92925 proxy.go:120] fail to check proxy env: Error ip not in block
	W1213 19:08:02.407127   92925 proxy.go:120] fail to check proxy env: Error ip not in block
	I1213 19:08:02.407198   92925 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 19:08:02.407241   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114-m02
	I1213 19:08:02.407257   92925 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 19:08:02.407313   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114-m02
	I1213 19:08:02.441677   92925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/ha-605114-m02/id_rsa Username:docker}
	I1213 19:08:02.462715   92925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/ha-605114-m02/id_rsa Username:docker}
	I1213 19:08:02.700903   92925 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 19:08:02.788606   92925 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 19:08:02.788680   92925 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 19:08:02.802406   92925 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 19:08:02.802471   92925 start.go:496] detecting cgroup driver to use...
	I1213 19:08:02.802520   92925 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 19:08:02.802599   92925 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 19:08:02.821557   92925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 19:08:02.843971   92925 docker.go:218] disabling cri-docker service (if available) ...
	I1213 19:08:02.844081   92925 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 19:08:02.866953   92925 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 19:08:02.884909   92925 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 19:08:03.137948   92925 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 19:08:03.363884   92925 docker.go:234] disabling docker service ...
	I1213 19:08:03.363990   92925 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 19:08:03.388880   92925 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 19:08:03.405597   92925 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 19:08:03.645933   92925 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 19:08:03.919704   92925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 19:08:03.941774   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 19:08:03.972913   92925 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 19:08:03.973103   92925 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:08:03.988083   92925 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 19:08:03.988256   92925 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:08:04.019667   92925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:08:04.031645   92925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:08:04.049709   92925 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 19:08:04.086713   92925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:08:04.109181   92925 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:08:04.119963   92925 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:08:04.154436   92925 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 19:08:04.170086   92925 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 19:08:04.191001   92925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 19:08:04.484381   92925 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 19:09:34.781930   92925 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.297515083s)
	I1213 19:09:34.781956   92925 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 19:09:34.782006   92925 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 19:09:34.785743   92925 start.go:564] Will wait 60s for crictl version
	I1213 19:09:34.785812   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:09:34.789353   92925 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 19:09:34.818524   92925 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 19:09:34.818612   92925 ssh_runner.go:195] Run: crio --version
	I1213 19:09:34.852441   92925 ssh_runner.go:195] Run: crio --version
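After the 1m30s systemctl restart crio, the run waits up to 60s for the socket path and then asks crictl for the runtime version (CRI-O 1.34.3 here). A rough Go sketch of that wait, assuming local access to the socket path instead of minikube's ssh_runner:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "time"
    )

    // waitForCRIO polls for the CRI-O socket and then asks crictl for the runtime
    // version, mirroring the "Will wait 60s for socket path" step in the log.
    func waitForCRIO(socket string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        for {
            if _, err := os.Stat(socket); err == nil {
                break
            }
            if time.Now().After(deadline) {
                return "", fmt.Errorf("timed out waiting for %s", socket)
            }
            time.Sleep(500 * time.Millisecond)
        }
        out, err := exec.Command("sudo", "crictl", "version").CombinedOutput()
        return string(out), err
    }

    func main() {
        out, err := waitForCRIO("/var/run/crio/crio.sock", 60*time.Second)
        fmt.Println(out, err)
    }
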
	I1213 19:09:34.887257   92925 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1213 19:09:34.890293   92925 out.go:179]   - env NO_PROXY=192.168.49.2
	I1213 19:09:34.893426   92925 cli_runner.go:164] Run: docker network inspect ha-605114 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 19:09:34.911684   92925 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 19:09:34.915601   92925 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
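The one-liner above makes the host.minikube.internal mapping in /etc/hosts idempotent: any existing line for the name is dropped and a fresh IP<tab>name entry is appended. A small Go sketch of the same string transformation (upsertHostsEntry is hypothetical; writing the result back to /etc/hosts still needs root):

    package main

    import (
        "fmt"
        "strings"
    )

    // upsertHostsEntry drops any existing line ending in "<TAB>name" and appends a
    // fresh "IP<TAB>name" mapping, which is what the grep/echo one-liner above does.
    func upsertHostsEntry(hosts, ip, name string) string {
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue // replaced below
            }
            kept = append(kept, line)
        }
        return strings.Join(kept, "\n") + fmt.Sprintf("\n%s\t%s\n", ip, name)
    }

    func main() {
        updated := upsertHostsEntry("127.0.0.1\tlocalhost", "192.168.49.1", "host.minikube.internal")
        fmt.Print(updated)
    }
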
	I1213 19:09:34.925402   92925 mustload.go:66] Loading cluster: ha-605114
	I1213 19:09:34.925637   92925 config.go:182] Loaded profile config "ha-605114": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 19:09:34.925900   92925 cli_runner.go:164] Run: docker container inspect ha-605114 --format={{.State.Status}}
	I1213 19:09:34.944458   92925 host.go:66] Checking if "ha-605114" exists ...
	I1213 19:09:34.944731   92925 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114 for IP: 192.168.49.3
	I1213 19:09:34.944745   92925 certs.go:195] generating shared ca certs ...
	I1213 19:09:34.944760   92925 certs.go:227] acquiring lock for ca certs: {Name:mkf9b87b9a1a82bdfcae8a54365751154f6d858f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:09:34.944889   92925 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key
	I1213 19:09:34.944944   92925 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key
	I1213 19:09:34.944957   92925 certs.go:257] generating profile certs ...
	I1213 19:09:34.945069   92925 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/client.key
	I1213 19:09:34.945157   92925 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.key.29c07aea
	I1213 19:09:34.945202   92925 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/proxy-client.key
	I1213 19:09:34.945215   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1213 19:09:34.945230   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1213 19:09:34.945254   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1213 19:09:34.945266   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1213 19:09:34.945281   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1213 19:09:34.945294   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1213 19:09:34.945309   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1213 19:09:34.945328   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1213 19:09:34.945383   92925 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637.pem (1338 bytes)
	W1213 19:09:34.945424   92925 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637_empty.pem, impossibly tiny 0 bytes
	I1213 19:09:34.945446   92925 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 19:09:34.945479   92925 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem (1082 bytes)
	I1213 19:09:34.945508   92925 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem (1123 bytes)
	I1213 19:09:34.945538   92925 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem (1675 bytes)
	I1213 19:09:34.945583   92925 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem (1708 bytes)
	I1213 19:09:34.945616   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:09:34.945632   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637.pem -> /usr/share/ca-certificates/4637.pem
	I1213 19:09:34.945649   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem -> /usr/share/ca-certificates/46372.pem
	I1213 19:09:34.945719   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114
	I1213 19:09:34.963328   92925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/ha-605114/id_rsa Username:docker}
	I1213 19:09:35.065324   92925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1213 19:09:35.069081   92925 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1213 19:09:35.077819   92925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1213 19:09:35.081455   92925 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1213 19:09:35.089763   92925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1213 19:09:35.093612   92925 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1213 19:09:35.102260   92925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1213 19:09:35.106728   92925 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1213 19:09:35.115519   92925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1213 19:09:35.119196   92925 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1213 19:09:35.129001   92925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1213 19:09:35.132624   92925 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1213 19:09:35.141653   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 19:09:35.161897   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 19:09:35.182131   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 19:09:35.202060   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 19:09:35.222310   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1213 19:09:35.243497   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 19:09:35.265517   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 19:09:35.284987   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 19:09:35.302971   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 19:09:35.320388   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637.pem --> /usr/share/ca-certificates/4637.pem (1338 bytes)
	I1213 19:09:35.338865   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem --> /usr/share/ca-certificates/46372.pem (1708 bytes)
	I1213 19:09:35.356332   92925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1213 19:09:35.369616   92925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1213 19:09:35.383108   92925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1213 19:09:35.396928   92925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1213 19:09:35.410529   92925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1213 19:09:35.423162   92925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1213 19:09:35.436667   92925 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1213 19:09:35.450451   92925 ssh_runner.go:195] Run: openssl version
	I1213 19:09:35.457142   92925 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:09:35.464516   92925 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 19:09:35.472169   92925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:09:35.475920   92925 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:09:35.475984   92925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:09:35.516956   92925 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 19:09:35.524426   92925 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4637.pem
	I1213 19:09:35.532136   92925 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4637.pem /etc/ssl/certs/4637.pem
	I1213 19:09:35.539767   92925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4637.pem
	I1213 19:09:35.543798   92925 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 18:27 /usr/share/ca-certificates/4637.pem
	I1213 19:09:35.543906   92925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4637.pem
	I1213 19:09:35.586837   92925 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 19:09:35.594791   92925 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/46372.pem
	I1213 19:09:35.602550   92925 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/46372.pem /etc/ssl/certs/46372.pem
	I1213 19:09:35.610984   92925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/46372.pem
	I1213 19:09:35.614895   92925 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 18:27 /usr/share/ca-certificates/46372.pem
	I1213 19:09:35.614973   92925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/46372.pem
	I1213 19:09:35.661484   92925 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 19:09:35.668847   92925 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 19:09:35.672924   92925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 19:09:35.714926   92925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 19:09:35.757278   92925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 19:09:35.798060   92925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 19:09:35.840340   92925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 19:09:35.883228   92925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
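The series of openssl x509 -noout -checkend 86400 runs above checks whether each control-plane certificate expires within 24 hours; -checkend exits non-zero when it does. A minimal Go sketch of that check, reusing the same openssl invocation (certsExpiringWithinADay is a hypothetical helper):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // certsExpiringWithinADay runs the same check as the log above: openssl's
    // -checkend exits non-zero when the certificate expires within the window.
    func certsExpiringWithinADay(paths []string) []string {
        var expiring []string
        for _, p := range paths {
            cmd := exec.Command("openssl", "x509", "-noout", "-in", p, "-checkend", "86400")
            if err := cmd.Run(); err != nil {
                expiring = append(expiring, p)
            }
        }
        return expiring
    }

    func main() {
        certs := []string{
            "/var/lib/minikube/certs/apiserver-kubelet-client.crt",
            "/var/lib/minikube/certs/etcd/server.crt",
            "/var/lib/minikube/certs/front-proxy-client.crt",
        }
        fmt.Println("expiring within 24h:", certsExpiringWithinADay(certs))
    }
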
	I1213 19:09:35.926498   92925 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.2 crio true true} ...
	I1213 19:09:35.926597   92925 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-605114-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-605114 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 19:09:35.926628   92925 kube-vip.go:115] generating kube-vip config ...
	I1213 19:09:35.926680   92925 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1213 19:09:35.939407   92925 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1213 19:09:35.939464   92925 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
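The kube-vip step above probes for the ip_vs kernel modules with "lsmod | grep ip_vs"; since the probe exits 1, the generated manifest shown above skips IPVS-based control-plane load-balancing and relies on ARP and leader election instead. A tiny Go sketch of that probe:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // ipvsAvailable reports whether the ip_vs kernel modules are loaded; when they
    // are not, the log above falls back to a kube-vip config without IPVS
    // load-balancing.
    func ipvsAvailable() bool {
        err := exec.Command("sh", "-c", "lsmod | grep ip_vs").Run()
        return err == nil // grep exits non-zero when nothing matches
    }

    func main() {
        fmt.Println("ip_vs loaded:", ipvsAvailable())
    }
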
	I1213 19:09:35.939538   92925 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 19:09:35.948342   92925 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 19:09:35.948446   92925 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1213 19:09:35.956523   92925 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1213 19:09:35.970227   92925 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 19:09:35.985384   92925 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1213 19:09:36.004385   92925 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1213 19:09:36.008483   92925 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 19:09:36.019218   92925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 19:09:36.155982   92925 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 19:09:36.170330   92925 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 19:09:36.170793   92925 config.go:182] Loaded profile config "ha-605114": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 19:09:36.174251   92925 out.go:179] * Verifying Kubernetes components...
	I1213 19:09:36.177213   92925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 19:09:36.319740   92925 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 19:09:36.334811   92925 kapi.go:59] client config for ha-605114: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/client.crt", KeyFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/client.key", CAFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1213 19:09:36.334886   92925 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1213 19:09:36.335095   92925 node_ready.go:35] waiting up to 6m0s for node "ha-605114-m02" to be "Ready" ...
	I1213 19:09:39.281934   92925 node_ready.go:49] node "ha-605114-m02" is "Ready"
	I1213 19:09:39.281962   92925 node_ready.go:38] duration metric: took 2.946847766s for node "ha-605114-m02" to be "Ready" ...
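Node ha-605114-m02 reports Ready after about 2.9s. The real wait goes through the client-go rest.Config logged just above; a rough kubectl-based equivalent, offered only as a sketch (waitNodeReady is hypothetical), could look like this:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // waitNodeReady polls kubectl until the node reports a Ready condition of
    // "True" or the timeout expires, roughly the wait performed by node_ready.go.
    func waitNodeReady(node string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        jsonpath := `-o=jsonpath={.status.conditions[?(@.type=="Ready")].status}`
        for time.Now().Before(deadline) {
            out, err := exec.Command("kubectl", "get", "node", node, jsonpath).Output()
            if err == nil && strings.TrimSpace(string(out)) == "True" {
                return nil
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("node %q not Ready after %s", node, timeout)
    }

    func main() {
        fmt.Println(waitNodeReady("ha-605114-m02", 6*time.Minute))
    }
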
	I1213 19:09:39.281975   92925 api_server.go:52] waiting for apiserver process to appear ...
	I1213 19:09:39.282034   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:39.782149   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:40.282856   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:40.782144   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:41.282958   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:41.782581   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:42.282264   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:42.782257   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:43.283132   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:43.782112   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:44.282168   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:44.782088   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:45.282593   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:45.782122   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:46.282927   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:46.782182   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:47.282980   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:47.783112   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:48.282633   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:48.782211   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:49.282732   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:49.782187   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:50.282735   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:50.782142   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:51.282519   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:51.782152   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:52.282197   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:52.782636   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:53.282768   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:53.782116   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:54.282300   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:54.782182   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:55.282883   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:55.783092   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:56.282203   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:56.783098   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:57.282717   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:57.782189   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:58.282252   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:58.782909   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:59.282100   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:59.782310   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:00.289145   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:00.782212   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:01.282192   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:01.782760   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:02.282108   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:02.782972   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:03.282353   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:03.782328   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:04.282366   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:04.782174   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:05.282835   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:05.782488   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:06.283036   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:06.782436   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:07.282292   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:07.782212   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:08.283033   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:08.783070   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:09.282897   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:09.782668   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:10.282222   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:10.782267   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:11.282198   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:11.782837   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:12.282212   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:12.783009   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:13.282406   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:13.782556   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:14.283140   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:14.782783   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:15.283077   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:15.783150   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:16.282934   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:16.783092   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:17.282186   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:17.782253   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:18.282771   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:18.782339   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:19.282255   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:19.782254   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:20.282346   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:20.782992   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:21.282270   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:21.782169   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:22.282176   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:22.782681   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:23.282402   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:23.783116   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:24.282118   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:24.782962   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:25.283031   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:25.783024   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:26.283105   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:26.782110   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:27.282833   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:27.782332   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:28.282978   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:28.782284   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:29.283095   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:29.782866   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:30.282438   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:30.782580   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:31.282697   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:31.783148   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:32.283119   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:32.782971   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:33.282108   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:33.783088   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:34.283075   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:34.782667   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:35.282868   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:35.782514   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
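The half-second pgrep loop above never finds a kube-apiserver process on this node, so once the wait window passes the run falls through to gathering component logs. A compact Go sketch of that polling loop, assuming local sudo access and an approximate interval (waitForAPIServerProcess is hypothetical):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServerProcess reruns the pgrep shown above every 500ms until the
    // kube-apiserver process appears or the timeout expires; in this run it never
    // does, so the caller falls back to log gathering.
    func waitForAPIServerProcess(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
            if err == nil {
                return nil // process found
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
    }

    func main() {
        fmt.Println(waitForAPIServerProcess(60 * time.Second))
    }
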
	I1213 19:10:36.282200   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:10:36.282308   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:10:36.311092   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:36.311117   92925 cri.go:89] found id: ""
	I1213 19:10:36.311125   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:10:36.311180   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:36.314888   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:10:36.314970   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:10:36.342553   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:36.342573   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:36.342578   92925 cri.go:89] found id: ""
	I1213 19:10:36.342586   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:10:36.342655   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:36.346486   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:36.349986   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:10:36.350061   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:10:36.375198   92925 cri.go:89] found id: ""
	I1213 19:10:36.375262   92925 logs.go:282] 0 containers: []
	W1213 19:10:36.375275   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:10:36.375281   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:10:36.375350   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:10:36.406767   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:36.406789   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:36.406794   92925 cri.go:89] found id: ""
	I1213 19:10:36.406801   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:10:36.406857   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:36.410743   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:36.414390   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:10:36.414490   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:10:36.441810   92925 cri.go:89] found id: ""
	I1213 19:10:36.441833   92925 logs.go:282] 0 containers: []
	W1213 19:10:36.441841   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:10:36.441848   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:10:36.441911   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:10:36.468354   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:36.468374   92925 cri.go:89] found id: ""
	I1213 19:10:36.468382   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:10:36.468436   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:36.472238   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:10:36.472316   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:10:36.500356   92925 cri.go:89] found id: ""
	I1213 19:10:36.500383   92925 logs.go:282] 0 containers: []
	W1213 19:10:36.500394   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:10:36.500404   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:10:36.500414   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:10:36.593811   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:10:36.593845   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:10:36.607625   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:10:36.607656   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:10:37.031907   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:10:37.023726    1511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:37.024402    1511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:37.025999    1511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:37.026604    1511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:37.028296    1511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:10:37.023726    1511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:37.024402    1511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:37.025999    1511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:37.026604    1511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:37.028296    1511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:10:37.031933   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:10:37.031948   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:37.057050   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:10:37.057079   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:37.097228   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:10:37.097262   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:37.148963   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:10:37.149014   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:37.217399   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:10:37.217436   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:37.248174   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:10:37.248203   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:37.274722   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:10:37.274748   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:10:37.355342   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:10:37.355379   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
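The diagnostics pass above first lists container IDs per component with "crictl ps -a --quiet --name=...", then tails the last 400 lines of each container plus the kubelet and CRI-O journals. A hedged Go sketch of the crictl part (tailComponentLogs is hypothetical):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // tailComponentLogs finds containers for a component by name and tails their
    // logs, the same crictl sequence used for diagnostics in the log above.
    func tailComponentLogs(name string, lines int) (map[string]string, error) {
        ids, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        logs := map[string]string{}
        for _, id := range strings.Fields(string(ids)) {
            out, _ := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(lines), id).CombinedOutput()
            logs[id] = string(out)
        }
        return logs, nil
    }

    func main() {
        logs, err := tailComponentLogs("kube-apiserver", 400)
        fmt.Println(len(logs), "containers,", err)
    }
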
	I1213 19:10:39.885413   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:39.896181   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:10:39.896250   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:10:39.928054   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:39.928078   92925 cri.go:89] found id: ""
	I1213 19:10:39.928087   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:10:39.928142   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:39.932690   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:10:39.932760   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:10:39.962089   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:39.962110   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:39.962114   92925 cri.go:89] found id: ""
	I1213 19:10:39.962122   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:10:39.962178   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:39.966008   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:39.970141   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:10:39.970211   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:10:40.031915   92925 cri.go:89] found id: ""
	I1213 19:10:40.031938   92925 logs.go:282] 0 containers: []
	W1213 19:10:40.031947   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:10:40.031954   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:10:40.032022   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:10:40.075124   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:40.075145   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:40.075150   92925 cri.go:89] found id: ""
	I1213 19:10:40.075157   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:10:40.075216   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:40.079588   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:40.083956   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:10:40.084077   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:10:40.120592   92925 cri.go:89] found id: ""
	I1213 19:10:40.120623   92925 logs.go:282] 0 containers: []
	W1213 19:10:40.120633   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:10:40.120640   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:10:40.120707   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:10:40.162573   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:40.162599   92925 cri.go:89] found id: ""
	I1213 19:10:40.162620   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:10:40.162692   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:40.167731   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:10:40.167810   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:10:40.197646   92925 cri.go:89] found id: ""
	I1213 19:10:40.197681   92925 logs.go:282] 0 containers: []
	W1213 19:10:40.197692   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:10:40.197701   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:10:40.197714   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:10:40.279428   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:10:40.270096    1639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:40.270945    1639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:40.271678    1639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:40.273521    1639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:40.274072    1639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:10:40.270096    1639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:40.270945    1639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:40.271678    1639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:40.273521    1639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:40.274072    1639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:10:40.279462   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:10:40.279476   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:40.317833   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:10:40.317867   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:40.365303   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:10:40.365339   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:40.391972   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:10:40.392006   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:10:40.467785   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:10:40.467824   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:10:40.499555   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:10:40.499587   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:10:40.601537   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:10:40.601571   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:10:40.614326   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:10:40.614357   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:40.643794   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:10:40.643823   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:40.696205   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:10:40.696242   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:43.224045   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:43.234786   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:10:43.234854   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:10:43.262459   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:43.262481   92925 cri.go:89] found id: ""
	I1213 19:10:43.262489   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:10:43.262544   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:43.267289   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:10:43.267362   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:10:43.294825   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:43.294846   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:43.294858   92925 cri.go:89] found id: ""
	I1213 19:10:43.294873   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:10:43.294931   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:43.298717   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:43.302500   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:10:43.302576   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:10:43.328978   92925 cri.go:89] found id: ""
	I1213 19:10:43.329001   92925 logs.go:282] 0 containers: []
	W1213 19:10:43.329048   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:10:43.329055   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:10:43.329115   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:10:43.358394   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:43.358419   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:43.358426   92925 cri.go:89] found id: ""
	I1213 19:10:43.358434   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:10:43.358544   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:43.363176   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:43.366906   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:10:43.366996   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:10:43.396556   92925 cri.go:89] found id: ""
	I1213 19:10:43.396583   92925 logs.go:282] 0 containers: []
	W1213 19:10:43.396592   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:10:43.396598   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:10:43.396657   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:10:43.422776   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:43.422803   92925 cri.go:89] found id: ""
	I1213 19:10:43.422813   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:10:43.422886   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:43.426512   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:10:43.426579   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:10:43.452942   92925 cri.go:89] found id: ""
	I1213 19:10:43.452966   92925 logs.go:282] 0 containers: []
	W1213 19:10:43.452975   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:10:43.452984   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:10:43.452996   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:43.479637   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:10:43.479708   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:10:43.492492   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:10:43.492521   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:43.555898   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:10:43.555930   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:43.583059   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:10:43.583089   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:10:43.665528   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:10:43.665562   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:10:43.713108   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:10:43.713136   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:10:43.817894   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:10:43.817930   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:10:43.900953   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:10:43.892916    1822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:43.893797    1822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:43.895356    1822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:43.895650    1822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:43.897247    1822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:10:43.892916    1822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:43.893797    1822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:43.895356    1822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:43.895650    1822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:43.897247    1822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:10:43.900978   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:10:43.900992   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:43.928040   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:10:43.928067   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:43.989295   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:10:43.989349   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:46.551759   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:46.562922   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:10:46.562999   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:10:46.590576   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:46.590607   92925 cri.go:89] found id: ""
	I1213 19:10:46.590615   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:10:46.590669   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:46.594481   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:10:46.594557   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:10:46.619444   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:46.619466   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:46.619472   92925 cri.go:89] found id: ""
	I1213 19:10:46.619480   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:10:46.619562   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:46.623350   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:46.626652   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:10:46.626726   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:10:46.655019   92925 cri.go:89] found id: ""
	I1213 19:10:46.655045   92925 logs.go:282] 0 containers: []
	W1213 19:10:46.655055   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:10:46.655061   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:10:46.655119   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:10:46.685081   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:46.685108   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:46.685113   92925 cri.go:89] found id: ""
	I1213 19:10:46.685121   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:10:46.685178   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:46.689664   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:46.693381   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:10:46.693455   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:10:46.719871   92925 cri.go:89] found id: ""
	I1213 19:10:46.719897   92925 logs.go:282] 0 containers: []
	W1213 19:10:46.719906   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:10:46.719914   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:10:46.719979   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:10:46.747153   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:46.747176   92925 cri.go:89] found id: ""
	I1213 19:10:46.747184   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:10:46.747239   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:46.751093   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:10:46.751198   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:10:46.777729   92925 cri.go:89] found id: ""
	I1213 19:10:46.777800   92925 logs.go:282] 0 containers: []
	W1213 19:10:46.777816   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:10:46.777827   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:10:46.777840   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:46.807286   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:10:46.807315   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:10:46.900226   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:10:46.900266   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:10:46.913850   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:10:46.913877   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:10:46.995097   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:10:46.986432    1930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:46.987537    1930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:46.988185    1930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:46.989944    1930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:46.990430    1930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:10:46.986432    1930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:46.987537    1930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:46.988185    1930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:46.989944    1930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:46.990430    1930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:10:46.995121   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:10:46.995146   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:47.020980   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:10:47.021038   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:47.062312   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:10:47.062348   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:10:47.143840   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:10:47.143916   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:10:47.176420   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:10:47.176455   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:47.221958   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:10:47.222003   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:47.276308   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:10:47.276349   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:49.804769   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:49.815535   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:10:49.815609   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:10:49.841153   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:49.841227   92925 cri.go:89] found id: ""
	I1213 19:10:49.841258   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:10:49.841341   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:49.844798   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:10:49.844903   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:10:49.872086   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:49.872111   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:49.872117   92925 cri.go:89] found id: ""
	I1213 19:10:49.872124   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:10:49.872178   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:49.875975   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:49.879817   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:10:49.879892   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:10:49.918961   92925 cri.go:89] found id: ""
	I1213 19:10:49.918987   92925 logs.go:282] 0 containers: []
	W1213 19:10:49.918996   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:10:49.919002   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:10:49.919059   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:10:49.959969   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:49.959994   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:49.959999   92925 cri.go:89] found id: ""
	I1213 19:10:49.960007   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:10:49.960063   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:49.964635   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:49.969140   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:10:49.969208   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:10:50.006023   92925 cri.go:89] found id: ""
	I1213 19:10:50.006049   92925 logs.go:282] 0 containers: []
	W1213 19:10:50.006058   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:10:50.006064   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:10:50.006143   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:10:50.040945   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:50.040965   92925 cri.go:89] found id: ""
	I1213 19:10:50.040973   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:10:50.041060   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:50.044991   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:10:50.045100   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:10:50.073352   92925 cri.go:89] found id: ""
	I1213 19:10:50.073383   92925 logs.go:282] 0 containers: []
	W1213 19:10:50.073409   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:10:50.073420   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:10:50.073437   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:10:50.092169   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:10:50.092219   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:50.167681   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:10:50.167719   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:50.220989   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:10:50.221028   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:50.252059   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:10:50.252091   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:10:50.358508   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:10:50.358555   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:10:50.434424   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:10:50.426219    2082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:50.426850    2082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:50.428449    2082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:50.429020    2082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:50.430880    2082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:10:50.426219    2082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:50.426850    2082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:50.428449    2082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:50.429020    2082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:50.430880    2082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:10:50.434452   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:10:50.434467   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:50.458963   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:10:50.458992   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:50.516376   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:10:50.516410   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:50.543978   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:10:50.544009   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:10:50.619429   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:10:50.619468   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:10:53.153421   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:53.163979   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:10:53.164048   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:10:53.191198   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:53.191259   92925 cri.go:89] found id: ""
	I1213 19:10:53.191291   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:10:53.191363   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:53.195132   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:10:53.195204   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:10:53.222253   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:53.222276   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:53.222280   92925 cri.go:89] found id: ""
	I1213 19:10:53.222287   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:10:53.222370   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:53.226176   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:53.229762   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:10:53.229878   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:10:53.260062   92925 cri.go:89] found id: ""
	I1213 19:10:53.260088   92925 logs.go:282] 0 containers: []
	W1213 19:10:53.260096   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:10:53.260103   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:10:53.260159   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:10:53.289940   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:53.290005   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:53.290024   92925 cri.go:89] found id: ""
	I1213 19:10:53.290037   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:10:53.290106   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:53.293745   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:53.297116   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:10:53.297199   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:10:53.324233   92925 cri.go:89] found id: ""
	I1213 19:10:53.324259   92925 logs.go:282] 0 containers: []
	W1213 19:10:53.324268   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:10:53.324274   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:10:53.324329   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:10:53.355230   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:53.355252   92925 cri.go:89] found id: ""
	I1213 19:10:53.355260   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:10:53.355312   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:53.358865   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:10:53.358932   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:10:53.388377   92925 cri.go:89] found id: ""
	I1213 19:10:53.388460   92925 logs.go:282] 0 containers: []
	W1213 19:10:53.388486   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:10:53.388531   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:10:53.388561   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:10:53.482197   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:10:53.482233   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:10:53.495635   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:10:53.495666   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:53.527174   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:10:53.527201   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:53.568473   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:10:53.568509   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:53.613038   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:10:53.613068   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:53.666213   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:10:53.666248   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:10:53.746993   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:10:53.747031   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:10:53.777726   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:10:53.777758   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:10:53.849162   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:10:53.840835    2240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:53.841725    2240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:53.842564    2240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:53.844081    2240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:53.844396    2240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:10:53.840835    2240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:53.841725    2240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:53.842564    2240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:53.844081    2240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:53.844396    2240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:10:53.849193   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:10:53.849207   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:53.879522   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:10:53.879551   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:56.408599   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:56.420063   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:10:56.420130   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:10:56.446598   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:56.446622   92925 cri.go:89] found id: ""
	I1213 19:10:56.446630   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:10:56.446691   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:56.450451   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:10:56.450519   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:10:56.477437   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:56.477460   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:56.477465   92925 cri.go:89] found id: ""
	I1213 19:10:56.477472   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:10:56.477560   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:56.481341   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:56.484891   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:10:56.484963   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:10:56.513437   92925 cri.go:89] found id: ""
	I1213 19:10:56.513459   92925 logs.go:282] 0 containers: []
	W1213 19:10:56.513467   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:10:56.513473   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:10:56.513531   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:10:56.542772   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:56.542812   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:56.542818   92925 cri.go:89] found id: ""
	I1213 19:10:56.542845   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:10:56.542930   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:56.546773   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:56.550355   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:10:56.550430   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:10:56.577663   92925 cri.go:89] found id: ""
	I1213 19:10:56.577687   92925 logs.go:282] 0 containers: []
	W1213 19:10:56.577695   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:10:56.577701   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:10:56.577811   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:10:56.604755   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:56.604827   92925 cri.go:89] found id: ""
	I1213 19:10:56.604849   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:10:56.604945   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:56.608549   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:10:56.608618   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:10:56.635735   92925 cri.go:89] found id: ""
	I1213 19:10:56.635759   92925 logs.go:282] 0 containers: []
	W1213 19:10:56.635767   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:10:56.635777   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:10:56.635789   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:10:56.729353   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:10:56.729388   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:10:56.741845   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:10:56.741874   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:10:56.815151   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:10:56.806729    2332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:56.807450    2332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:56.808916    2332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:56.809436    2332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:56.811611    2332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:10:56.806729    2332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:56.807450    2332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:56.808916    2332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:56.809436    2332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:56.811611    2332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:10:56.815178   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:10:56.815193   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:56.871711   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:10:56.871748   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:56.904003   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:10:56.904034   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:10:56.941519   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:10:56.941549   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:56.974994   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:10:56.975022   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:57.015259   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:10:57.015290   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:57.059492   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:10:57.059527   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:57.085661   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:10:57.085690   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:10:59.675412   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:59.686117   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:10:59.686192   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:10:59.710921   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:59.710951   92925 cri.go:89] found id: ""
	I1213 19:10:59.710960   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:10:59.711015   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:59.714894   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:10:59.715008   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:10:59.742170   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:59.742193   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:59.742199   92925 cri.go:89] found id: ""
	I1213 19:10:59.742206   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:10:59.742261   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:59.746138   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:59.750866   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:10:59.750942   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:10:59.777917   92925 cri.go:89] found id: ""
	I1213 19:10:59.777943   92925 logs.go:282] 0 containers: []
	W1213 19:10:59.777951   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:10:59.777957   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:10:59.778015   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:10:59.803883   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:59.803903   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:59.803908   92925 cri.go:89] found id: ""
	I1213 19:10:59.803916   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:10:59.803971   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:59.807903   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:59.811388   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:10:59.811453   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:10:59.837952   92925 cri.go:89] found id: ""
	I1213 19:10:59.837977   92925 logs.go:282] 0 containers: []
	W1213 19:10:59.837986   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:10:59.837992   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:10:59.838048   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:10:59.864431   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:59.864490   92925 cri.go:89] found id: ""
	I1213 19:10:59.864512   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:10:59.864594   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:59.869272   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:10:59.869345   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:10:59.896571   92925 cri.go:89] found id: ""
	I1213 19:10:59.896603   92925 logs.go:282] 0 containers: []
	W1213 19:10:59.896612   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:10:59.896622   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:10:59.896634   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:10:59.997222   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:10:59.997313   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:00.122051   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:00.122166   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:00.334228   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:00.323858    2472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:00.324625    2472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:00.326029    2472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:00.326896    2472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:00.328835    2472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:00.323858    2472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:00.324625    2472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:00.326029    2472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:00.326896    2472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:00.328835    2472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:00.334270   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:00.334284   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:00.397345   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:00.397381   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:00.460082   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:00.460118   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:00.507030   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:00.507068   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:00.561579   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:00.561611   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:00.590319   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:11:00.590346   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:00.618590   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:00.618617   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:00.700620   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:00.700655   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:03.247538   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:03.260650   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:03.260720   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:03.296710   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:03.296736   92925 cri.go:89] found id: ""
	I1213 19:11:03.296744   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:03.296804   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:03.300974   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:03.301083   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:03.332989   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:03.333019   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:03.333024   92925 cri.go:89] found id: ""
	I1213 19:11:03.333031   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:03.333085   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:03.337959   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:03.341569   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:03.341642   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:03.367805   92925 cri.go:89] found id: ""
	I1213 19:11:03.367831   92925 logs.go:282] 0 containers: []
	W1213 19:11:03.367840   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:03.367847   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:03.367910   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:03.396144   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:03.396165   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:03.396170   92925 cri.go:89] found id: ""
	I1213 19:11:03.396177   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:03.396234   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:03.400643   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:03.404350   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:03.404422   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:03.431472   92925 cri.go:89] found id: ""
	I1213 19:11:03.431498   92925 logs.go:282] 0 containers: []
	W1213 19:11:03.431508   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:03.431520   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:03.431602   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:03.459968   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:03.460034   92925 cri.go:89] found id: ""
	I1213 19:11:03.460058   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:11:03.460134   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:03.464138   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:03.464230   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:03.491871   92925 cri.go:89] found id: ""
	I1213 19:11:03.491897   92925 logs.go:282] 0 containers: []
	W1213 19:11:03.491906   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:03.491916   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:03.491928   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:03.528376   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:03.528451   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:03.562095   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:03.562124   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:03.575381   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:03.575410   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:03.602586   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:03.602615   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:03.651880   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:03.651912   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:03.708104   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:11:03.708142   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:03.736240   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:03.736268   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:03.814277   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:03.814314   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:03.920505   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:03.920542   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:04.025281   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:04.014467    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:04.015603    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:04.016913    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:04.017960    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:04.019083    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:04.014467    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:04.015603    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:04.016913    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:04.017960    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:04.019083    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:04.025308   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:04.025326   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:06.584492   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:06.595822   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:06.595900   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:06.627891   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:06.627917   92925 cri.go:89] found id: ""
	I1213 19:11:06.627925   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:06.627982   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:06.632107   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:06.632184   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:06.657896   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:06.657921   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:06.657926   92925 cri.go:89] found id: ""
	I1213 19:11:06.657934   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:06.657989   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:06.661493   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:06.665545   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:06.665611   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:06.696673   92925 cri.go:89] found id: ""
	I1213 19:11:06.696748   92925 logs.go:282] 0 containers: []
	W1213 19:11:06.696773   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:06.696792   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:06.696879   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:06.724330   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:06.724355   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:06.724360   92925 cri.go:89] found id: ""
	I1213 19:11:06.724368   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:06.724422   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:06.728040   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:06.731506   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:06.731610   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:06.756515   92925 cri.go:89] found id: ""
	I1213 19:11:06.756578   92925 logs.go:282] 0 containers: []
	W1213 19:11:06.756601   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:06.756622   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:06.756700   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:06.783035   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:06.783094   92925 cri.go:89] found id: ""
	I1213 19:11:06.783117   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:11:06.783184   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:06.787082   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:06.787158   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:06.813991   92925 cri.go:89] found id: ""
	I1213 19:11:06.814014   92925 logs.go:282] 0 containers: []
	W1213 19:11:06.814022   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:06.814031   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:06.814043   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:06.860023   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:06.860057   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:06.915266   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:06.915303   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:07.005436   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:07.005480   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:07.041558   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:07.041591   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:07.055111   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:07.055140   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:07.085506   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:07.085534   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:07.140042   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:07.140080   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:07.170267   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:11:07.170300   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:07.197645   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:07.197676   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:07.298125   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:07.298167   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:07.368495   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:07.358879    2811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:07.359581    2811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:07.361161    2811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:07.361458    2811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:07.363677    2811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:07.358879    2811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:07.359581    2811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:07.361161    2811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:07.361458    2811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:07.363677    2811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:09.868760   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:09.879760   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:09.879831   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:09.907241   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:09.907264   92925 cri.go:89] found id: ""
	I1213 19:11:09.907272   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:09.907331   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:09.910883   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:09.910954   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:09.936137   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:09.936156   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:09.936161   92925 cri.go:89] found id: ""
	I1213 19:11:09.936167   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:09.936222   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:09.940048   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:09.951154   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:09.951222   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:09.985435   92925 cri.go:89] found id: ""
	I1213 19:11:09.985520   92925 logs.go:282] 0 containers: []
	W1213 19:11:09.985532   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:09.985540   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:09.985648   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:10.028412   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:10.028487   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:10.028521   92925 cri.go:89] found id: ""
	I1213 19:11:10.028549   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:10.028643   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:10.035436   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:10.040716   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:10.040834   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:10.070216   92925 cri.go:89] found id: ""
	I1213 19:11:10.070245   92925 logs.go:282] 0 containers: []
	W1213 19:11:10.070255   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:10.070261   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:10.070323   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:10.107151   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:10.107174   92925 cri.go:89] found id: ""
	I1213 19:11:10.107183   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:11:10.107241   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:10.111700   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:10.111773   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:10.148889   92925 cri.go:89] found id: ""
	I1213 19:11:10.148913   92925 logs.go:282] 0 containers: []
	W1213 19:11:10.148922   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:10.148931   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:10.148946   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:10.183850   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:10.183953   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:10.284535   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:10.284572   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:10.361456   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:10.353378    2893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:10.354229    2893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:10.355719    2893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:10.356209    2893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:10.357653    2893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:10.353378    2893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:10.354229    2893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:10.355719    2893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:10.356209    2893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:10.357653    2893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:10.361521   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:10.361543   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:10.401195   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:10.401230   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:10.466771   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:11:10.466806   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:10.492988   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:10.493041   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:10.506114   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:10.506143   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:10.534614   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:10.534643   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:10.589313   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:10.589346   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:10.621617   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:10.621646   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:13.202940   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:13.214007   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:13.214076   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:13.241311   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:13.241334   92925 cri.go:89] found id: ""
	I1213 19:11:13.241342   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:13.241399   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:13.244857   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:13.244973   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:13.271246   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:13.271272   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:13.271277   92925 cri.go:89] found id: ""
	I1213 19:11:13.271284   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:13.271368   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:13.275204   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:13.278868   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:13.278941   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:13.306334   92925 cri.go:89] found id: ""
	I1213 19:11:13.306365   92925 logs.go:282] 0 containers: []
	W1213 19:11:13.306373   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:13.306379   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:13.306440   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:13.332388   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:13.332407   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:13.332412   92925 cri.go:89] found id: ""
	I1213 19:11:13.332419   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:13.332474   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:13.336618   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:13.340235   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:13.340305   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:13.366487   92925 cri.go:89] found id: ""
	I1213 19:11:13.366522   92925 logs.go:282] 0 containers: []
	W1213 19:11:13.366531   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:13.366537   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:13.366597   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:13.397475   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:13.397496   92925 cri.go:89] found id: ""
	I1213 19:11:13.397504   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:11:13.397565   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:13.401266   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:13.401377   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:13.430168   92925 cri.go:89] found id: ""
	I1213 19:11:13.430196   92925 logs.go:282] 0 containers: []
	W1213 19:11:13.430205   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:13.430221   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:13.430235   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:13.496086   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:13.486609    3016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:13.487472    3016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:13.489304    3016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:13.489961    3016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:13.491916    3016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:13.486609    3016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:13.487472    3016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:13.489304    3016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:13.489961    3016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:13.491916    3016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:13.496111   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:13.496124   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:13.548378   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:13.548413   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:13.601861   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:13.601899   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:13.634165   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:11:13.634193   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:13.662242   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:13.662270   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:13.737810   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:13.737846   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:13.770540   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:13.770574   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:13.783830   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:13.783907   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:13.810122   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:13.810149   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:13.856452   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:13.856485   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:16.448594   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:16.459829   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:16.459900   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:16.489717   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:16.489737   92925 cri.go:89] found id: ""
	I1213 19:11:16.489745   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:16.489799   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:16.494205   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:16.494290   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:16.529314   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:16.529336   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:16.529340   92925 cri.go:89] found id: ""
	I1213 19:11:16.529349   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:16.529404   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:16.533136   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:16.536814   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:16.536887   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:16.563026   92925 cri.go:89] found id: ""
	I1213 19:11:16.563064   92925 logs.go:282] 0 containers: []
	W1213 19:11:16.563073   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:16.563079   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:16.563139   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:16.594519   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:16.594541   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:16.594546   92925 cri.go:89] found id: ""
	I1213 19:11:16.594554   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:16.594611   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:16.598288   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:16.601875   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:16.601946   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:16.628577   92925 cri.go:89] found id: ""
	I1213 19:11:16.628603   92925 logs.go:282] 0 containers: []
	W1213 19:11:16.628612   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:16.628618   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:16.628676   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:16.656978   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:16.657001   92925 cri.go:89] found id: ""
	I1213 19:11:16.657039   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:11:16.657095   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:16.661124   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:16.661236   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:16.695697   92925 cri.go:89] found id: ""
	I1213 19:11:16.695731   92925 logs.go:282] 0 containers: []
	W1213 19:11:16.695739   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:16.695748   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:16.695760   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:16.766672   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:16.757776    3152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:16.758599    3152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:16.760229    3152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:16.760563    3152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:16.762386    3152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:16.757776    3152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:16.758599    3152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:16.760229    3152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:16.760563    3152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:16.762386    3152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:16.766696   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:16.766709   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:16.808187   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:16.808237   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:16.850027   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:16.850062   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:16.906135   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:16.906174   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:16.935630   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:11:16.935661   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:16.963433   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:16.963463   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:17.045818   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:17.045852   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:17.079053   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:17.079080   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:17.186217   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:17.186251   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:17.198725   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:17.198760   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:19.727394   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:19.738364   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:19.738431   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:19.768160   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:19.768183   92925 cri.go:89] found id: ""
	I1213 19:11:19.768196   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:19.768252   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:19.772004   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:19.772128   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:19.799342   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:19.799368   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:19.799374   92925 cri.go:89] found id: ""
	I1213 19:11:19.799382   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:19.799466   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:19.803455   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:19.807247   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:19.807340   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:19.835979   92925 cri.go:89] found id: ""
	I1213 19:11:19.836005   92925 logs.go:282] 0 containers: []
	W1213 19:11:19.836014   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:19.836021   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:19.836081   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:19.864302   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:19.864325   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:19.864331   92925 cri.go:89] found id: ""
	I1213 19:11:19.864338   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:19.864397   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:19.868104   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:19.871725   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:19.871812   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:19.899890   92925 cri.go:89] found id: ""
	I1213 19:11:19.899919   92925 logs.go:282] 0 containers: []
	W1213 19:11:19.899937   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:19.899944   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:19.900012   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:19.927600   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:19.927624   92925 cri.go:89] found id: ""
	I1213 19:11:19.927632   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:11:19.927685   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:19.931424   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:19.931509   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:19.961424   92925 cri.go:89] found id: ""
	I1213 19:11:19.961454   92925 logs.go:282] 0 containers: []
	W1213 19:11:19.961469   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:19.961479   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:19.961492   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:20.002155   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:20.002284   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:20.082123   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:20.071968    3295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:20.072791    3295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:20.075159    3295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:20.076013    3295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:20.077851    3295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:20.071968    3295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:20.072791    3295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:20.075159    3295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:20.076013    3295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:20.077851    3295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:20.082148   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:20.082162   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:20.127578   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:20.127614   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:20.174673   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:11:20.174713   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:20.204713   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:20.204791   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:20.282989   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:20.283026   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:20.327361   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:20.327436   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:20.427993   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:20.428032   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:20.442295   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:20.442326   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:20.471477   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:20.471510   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:23.025659   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:23.036724   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:23.036796   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:23.064245   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:23.064269   92925 cri.go:89] found id: ""
	I1213 19:11:23.064281   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:23.064341   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:23.068194   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:23.068269   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:23.097592   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:23.097616   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:23.097622   92925 cri.go:89] found id: ""
	I1213 19:11:23.097629   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:23.097692   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:23.104525   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:23.110378   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:23.110459   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:23.144932   92925 cri.go:89] found id: ""
	I1213 19:11:23.144958   92925 logs.go:282] 0 containers: []
	W1213 19:11:23.144966   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:23.144972   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:23.145063   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:23.177104   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:23.177129   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:23.177134   92925 cri.go:89] found id: ""
	I1213 19:11:23.177142   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:23.177197   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:23.181178   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:23.185904   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:23.185988   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:23.213662   92925 cri.go:89] found id: ""
	I1213 19:11:23.213740   92925 logs.go:282] 0 containers: []
	W1213 19:11:23.213765   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:23.213784   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:23.213891   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:23.244233   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:23.244298   92925 cri.go:89] found id: ""
	I1213 19:11:23.244322   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:11:23.244413   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:23.248148   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:23.248228   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:23.276740   92925 cri.go:89] found id: ""
	I1213 19:11:23.276765   92925 logs.go:282] 0 containers: []
	W1213 19:11:23.276773   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:23.276784   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:23.276796   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:23.336420   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:11:23.336453   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:23.368543   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:23.368572   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:23.450730   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:23.450772   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:23.483510   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:23.483550   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:23.628675   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:23.619033    3457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:23.620672    3457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:23.621438    3457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:23.623126    3457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:23.623775    3457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:23.619033    3457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:23.620672    3457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:23.621438    3457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:23.623126    3457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:23.623775    3457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:23.628699   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:23.628713   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:23.665846   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:23.665882   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:23.713922   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:23.713959   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:23.752354   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:23.752384   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:23.858109   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:23.858150   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:23.871373   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:23.871404   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:26.419535   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:26.430634   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:26.430705   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:26.458628   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:26.458650   92925 cri.go:89] found id: ""
	I1213 19:11:26.458661   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:26.458716   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:26.462422   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:26.462495   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:26.490349   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:26.490389   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:26.490394   92925 cri.go:89] found id: ""
	I1213 19:11:26.490401   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:26.490468   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:26.494405   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:26.498636   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:26.498716   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:26.528607   92925 cri.go:89] found id: ""
	I1213 19:11:26.528637   92925 logs.go:282] 0 containers: []
	W1213 19:11:26.528646   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:26.528653   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:26.528722   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:26.558710   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:26.558733   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:26.558741   92925 cri.go:89] found id: ""
	I1213 19:11:26.558748   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:26.558825   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:26.562803   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:26.566707   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:26.566808   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:26.596729   92925 cri.go:89] found id: ""
	I1213 19:11:26.596754   92925 logs.go:282] 0 containers: []
	W1213 19:11:26.596763   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:26.596769   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:26.596826   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:26.624054   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:26.624077   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:26.624083   92925 cri.go:89] found id: ""
	I1213 19:11:26.624090   92925 logs.go:282] 2 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:11:26.624167   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:26.628449   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:26.632716   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:26.632822   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:26.659170   92925 cri.go:89] found id: ""
	I1213 19:11:26.659195   92925 logs.go:282] 0 containers: []
	W1213 19:11:26.659204   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:26.659213   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:26.659226   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:26.694272   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:11:26.694300   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:26.720924   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:11:26.720959   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:26.751980   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:26.752009   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:26.824509   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:26.824547   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:26.855705   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:26.855733   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:26.867403   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:26.867431   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:26.906787   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:26.906823   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:26.951319   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:26.951351   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:27.006541   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:27.006579   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:27.033554   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:27.033583   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:27.135230   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:27.135266   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:27.210106   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:27.201700    3649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:27.202413    3649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:27.203893    3649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:27.204311    3649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:27.205969    3649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:27.201700    3649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:27.202413    3649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:27.203893    3649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:27.204311    3649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:27.205969    3649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:29.711829   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:29.723531   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:29.723601   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:29.753961   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:29.753984   92925 cri.go:89] found id: ""
	I1213 19:11:29.753992   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:29.754050   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:29.757806   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:29.757873   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:29.783149   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:29.783181   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:29.783186   92925 cri.go:89] found id: ""
	I1213 19:11:29.783194   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:29.783263   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:29.787082   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:29.790979   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:29.791109   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:29.817959   92925 cri.go:89] found id: ""
	I1213 19:11:29.817985   92925 logs.go:282] 0 containers: []
	W1213 19:11:29.817994   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:29.818000   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:29.818060   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:29.846235   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:29.846257   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:29.846262   92925 cri.go:89] found id: ""
	I1213 19:11:29.846270   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:29.846351   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:29.849953   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:29.853572   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:29.853692   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:29.879800   92925 cri.go:89] found id: ""
	I1213 19:11:29.879834   92925 logs.go:282] 0 containers: []
	W1213 19:11:29.879843   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:29.879850   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:29.879915   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:29.907082   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:29.907116   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:29.907121   92925 cri.go:89] found id: ""
	I1213 19:11:29.907128   92925 logs.go:282] 2 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:11:29.907192   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:29.910914   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:29.914566   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:29.914651   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:29.939124   92925 cri.go:89] found id: ""
	I1213 19:11:29.939149   92925 logs.go:282] 0 containers: []
	W1213 19:11:29.939158   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:29.939168   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:29.939205   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:29.981605   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:29.981639   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:30.089079   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:30.089116   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:30.156090   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:30.156124   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:30.186549   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:11:30.186580   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:30.214921   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:11:30.214950   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:30.242668   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:30.242697   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:30.319413   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:30.319445   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:30.419178   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:30.419215   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:30.431724   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:30.431753   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:30.501053   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:30.492849    3776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:30.493577    3776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:30.495362    3776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:30.495976    3776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:30.497562    3776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:30.492849    3776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:30.493577    3776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:30.495362    3776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:30.495976    3776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:30.497562    3776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:30.501078   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:30.501092   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:30.532550   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:30.532577   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:33.076374   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:33.087831   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:33.087899   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:33.126218   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:33.126241   92925 cri.go:89] found id: ""
	I1213 19:11:33.126251   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:33.126315   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:33.130647   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:33.130731   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:33.158982   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:33.159013   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:33.159020   92925 cri.go:89] found id: ""
	I1213 19:11:33.159028   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:33.159094   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:33.162984   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:33.166562   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:33.166635   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:33.193330   92925 cri.go:89] found id: ""
	I1213 19:11:33.193353   92925 logs.go:282] 0 containers: []
	W1213 19:11:33.193361   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:33.193367   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:33.193423   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:33.221129   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:33.221153   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:33.221159   92925 cri.go:89] found id: ""
	I1213 19:11:33.221166   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:33.221239   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:33.225797   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:33.229503   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:33.229615   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:33.257761   92925 cri.go:89] found id: ""
	I1213 19:11:33.257786   92925 logs.go:282] 0 containers: []
	W1213 19:11:33.257795   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:33.257802   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:33.257865   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:33.285915   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:33.285941   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:33.285957   92925 cri.go:89] found id: ""
	I1213 19:11:33.285968   92925 logs.go:282] 2 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:11:33.286026   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:33.289819   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:33.293581   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:33.293655   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:33.324324   92925 cri.go:89] found id: ""
	I1213 19:11:33.324348   92925 logs.go:282] 0 containers: []
	W1213 19:11:33.324357   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:33.324366   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:33.324377   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:33.350842   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:33.350913   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:33.424344   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:33.424380   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:33.452897   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:33.452930   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:33.504468   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:33.504506   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:33.579150   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:11:33.579183   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:33.607049   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:11:33.607076   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:33.633297   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:33.633326   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:33.668670   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:33.668699   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:33.766904   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:33.766936   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:33.780538   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:33.780567   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:33.857253   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:33.848822    3937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:33.849778    3937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:33.851312    3937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:33.851759    3937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:33.853392    3937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:33.848822    3937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:33.849778    3937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:33.851312    3937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:33.851759    3937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:33.853392    3937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:33.857275   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:33.857290   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:36.398970   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:36.410341   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:36.410416   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:36.438456   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:36.438479   92925 cri.go:89] found id: ""
	I1213 19:11:36.438488   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:36.438568   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:36.442320   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:36.442395   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:36.470092   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:36.470116   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:36.470121   92925 cri.go:89] found id: ""
	I1213 19:11:36.470131   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:36.470218   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:36.474021   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:36.477467   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:36.477578   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:36.505647   92925 cri.go:89] found id: ""
	I1213 19:11:36.505670   92925 logs.go:282] 0 containers: []
	W1213 19:11:36.505714   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:36.505733   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:36.505804   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:36.537872   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:36.537895   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:36.537900   92925 cri.go:89] found id: ""
	I1213 19:11:36.537907   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:36.537961   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:36.541660   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:36.545244   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:36.545314   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:36.570195   92925 cri.go:89] found id: ""
	I1213 19:11:36.570228   92925 logs.go:282] 0 containers: []
	W1213 19:11:36.570238   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:36.570250   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:36.570339   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:36.595894   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:36.595958   92925 cri.go:89] found id: ""
	I1213 19:11:36.595979   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:11:36.596064   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:36.599675   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:36.599789   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:36.624988   92925 cri.go:89] found id: ""
	I1213 19:11:36.625083   92925 logs.go:282] 0 containers: []
	W1213 19:11:36.625101   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:36.625112   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:36.625123   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:36.718891   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:36.718924   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:36.786494   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:36.778476    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:36.779141    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:36.780744    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:36.781242    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:36.782695    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:36.778476    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:36.779141    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:36.780744    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:36.781242    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:36.782695    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:36.786519   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:36.786531   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:36.828295   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:36.828328   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:36.871560   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:36.871591   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:36.941295   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:36.941335   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:37.023869   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:37.023902   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:37.055672   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:37.055700   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:37.069301   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:37.069334   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:37.098989   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:37.099015   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:37.135738   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:11:37.135771   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:39.664114   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:39.675928   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:39.675999   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:39.702971   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:39.702989   92925 cri.go:89] found id: ""
	I1213 19:11:39.702998   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:39.703053   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:39.707021   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:39.707096   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:39.733615   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:39.733637   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:39.733642   92925 cri.go:89] found id: ""
	I1213 19:11:39.733663   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:39.733720   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:39.737520   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:39.740992   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:39.741107   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:39.769090   92925 cri.go:89] found id: ""
	I1213 19:11:39.769174   92925 logs.go:282] 0 containers: []
	W1213 19:11:39.769194   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:39.769201   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:39.769351   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:39.804293   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:39.804314   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:39.804319   92925 cri.go:89] found id: ""
	I1213 19:11:39.804326   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:39.804389   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:39.808495   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:39.812181   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:39.812255   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:39.838217   92925 cri.go:89] found id: ""
	I1213 19:11:39.838243   92925 logs.go:282] 0 containers: []
	W1213 19:11:39.838252   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:39.838259   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:39.838314   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:39.866484   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:39.866504   92925 cri.go:89] found id: ""
	I1213 19:11:39.866512   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:11:39.866567   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:39.870814   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:39.870885   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:39.908207   92925 cri.go:89] found id: ""
	I1213 19:11:39.908233   92925 logs.go:282] 0 containers: []
	W1213 19:11:39.908243   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:39.908252   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:39.908264   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:39.920472   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:39.920499   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:39.948910   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:39.948951   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:40.012782   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:11:40.012825   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:40.047267   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:40.047297   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:40.129790   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:40.129871   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:40.168487   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:40.168519   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:40.269381   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:40.269456   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:40.338885   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:40.330165    4198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:40.330955    4198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:40.333137    4198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:40.333832    4198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:40.335154    4198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:40.330165    4198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:40.330955    4198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:40.333137    4198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:40.333832    4198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:40.335154    4198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
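	(Every gathering cycle above fails at the same point: nothing answers on localhost:8443 inside the node, so each kubectl "describe nodes" call is refused while the crictl and journalctl collection still succeeds. A minimal manual probe of that condition, illustrative only and not part of the recorded run — the profile name is passed in as an argument rather than taken from this log — might look like:
	
	    #!/usr/bin/env bash
	    # Illustrative sketch: check the apiserver port and container inside a minikube node.
	    PROFILE="${1:?usage: probe.sh <cluster-profile>}"
	    
	    # Is anything listening on the apiserver port inside the node?
	    minikube -p "$PROFILE" ssh -- sudo ss -ltnp | grep 8443 || echo "nothing listening on :8443"
	    
	    # Mirror the gatherer's crictl calls: list the kube-apiserver container and read its logs.
	    minikube -p "$PROFILE" ssh -- sudo crictl ps -a --name kube-apiserver
	    minikube -p "$PROFILE" ssh -- sudo crictl ps -a --quiet --name kube-apiserver
	
	With a container ID from the last command, "sudo crictl logs --tail 400 <id>" over the same ssh path reproduces the per-container log collection seen in the cycles above.)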
	I1213 19:11:40.338906   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:40.338919   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:40.394986   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:40.395024   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:40.460751   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:40.460799   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:42.992519   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:43.004031   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:43.004110   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:43.032556   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:43.032578   92925 cri.go:89] found id: ""
	I1213 19:11:43.032586   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:43.032640   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:43.036332   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:43.036401   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:43.065252   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:43.065282   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:43.065288   92925 cri.go:89] found id: ""
	I1213 19:11:43.065296   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:43.065358   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:43.070007   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:43.074047   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:43.074122   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:43.108141   92925 cri.go:89] found id: ""
	I1213 19:11:43.108169   92925 logs.go:282] 0 containers: []
	W1213 19:11:43.108181   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:43.108188   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:43.108248   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:43.139539   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:43.139560   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:43.139566   92925 cri.go:89] found id: ""
	I1213 19:11:43.139574   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:43.139629   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:43.143534   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:43.147218   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:43.147292   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:43.175751   92925 cri.go:89] found id: ""
	I1213 19:11:43.175825   92925 logs.go:282] 0 containers: []
	W1213 19:11:43.175849   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:43.175868   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:43.175952   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:43.200994   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:43.201062   92925 cri.go:89] found id: ""
	I1213 19:11:43.201072   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:11:43.201127   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:43.204988   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:43.205128   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:43.231895   92925 cri.go:89] found id: ""
	I1213 19:11:43.231922   92925 logs.go:282] 0 containers: []
	W1213 19:11:43.231946   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:43.231955   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:43.231968   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:43.272192   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:43.272228   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:43.334615   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:11:43.334650   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:43.366125   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:43.366153   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:43.397225   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:43.397254   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:43.468828   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:43.460439    4331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:43.461076    4331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:43.462731    4331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:43.463290    4331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:43.464964    4331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:43.460439    4331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:43.461076    4331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:43.462731    4331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:43.463290    4331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:43.464964    4331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:43.468856   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:43.468869   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:43.519337   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:43.519376   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:43.552934   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:43.552963   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:43.636492   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:43.636526   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:43.735496   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:43.735529   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:43.748666   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:43.748693   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:46.276009   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:46.287459   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:46.287539   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:46.315787   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:46.315809   92925 cri.go:89] found id: ""
	I1213 19:11:46.315817   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:46.315881   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:46.319776   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:46.319870   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:46.349638   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:46.349701   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:46.349721   92925 cri.go:89] found id: ""
	I1213 19:11:46.349737   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:46.349810   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:46.353770   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:46.357319   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:46.357391   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:46.387852   92925 cri.go:89] found id: ""
	I1213 19:11:46.387879   92925 logs.go:282] 0 containers: []
	W1213 19:11:46.387888   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:46.387895   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:46.387956   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:46.415327   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:46.415351   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:46.415362   92925 cri.go:89] found id: ""
	I1213 19:11:46.415369   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:46.415425   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:46.420351   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:46.423877   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:46.423945   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:46.452445   92925 cri.go:89] found id: ""
	I1213 19:11:46.452471   92925 logs.go:282] 0 containers: []
	W1213 19:11:46.452480   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:46.452487   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:46.452543   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:46.488306   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:46.488329   92925 cri.go:89] found id: ""
	I1213 19:11:46.488337   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:11:46.488393   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:46.492372   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:46.492477   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:46.531601   92925 cri.go:89] found id: ""
	I1213 19:11:46.531625   92925 logs.go:282] 0 containers: []
	W1213 19:11:46.531635   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:46.531644   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:46.531656   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:46.576619   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:46.576653   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:46.637968   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:46.638005   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:46.666074   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:46.666103   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:46.699911   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:46.699988   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:46.741837   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:11:46.741889   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:46.771703   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:46.771729   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:46.848202   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:46.848240   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:46.949628   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:46.949664   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:46.963040   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:46.963071   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:47.045784   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:47.037108    4491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:47.038507    4491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:47.039621    4491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:47.040561    4491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:47.042097    4491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:47.037108    4491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:47.038507    4491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:47.039621    4491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:47.040561    4491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:47.042097    4491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:47.045805   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:47.045818   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:49.573745   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:49.584944   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:49.585049   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:49.612421   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:49.612440   92925 cri.go:89] found id: ""
	I1213 19:11:49.612448   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:49.612503   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:49.616771   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:49.616842   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:49.644250   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:49.644313   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:49.644342   92925 cri.go:89] found id: ""
	I1213 19:11:49.644365   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:49.644448   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:49.648357   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:49.652087   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:49.652211   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:49.678765   92925 cri.go:89] found id: ""
	I1213 19:11:49.678790   92925 logs.go:282] 0 containers: []
	W1213 19:11:49.678798   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:49.678804   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:49.678882   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:49.707013   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:49.707082   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:49.707102   92925 cri.go:89] found id: ""
	I1213 19:11:49.707128   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:49.707219   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:49.711513   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:49.715226   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:49.715321   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:49.741306   92925 cri.go:89] found id: ""
	I1213 19:11:49.741375   92925 logs.go:282] 0 containers: []
	W1213 19:11:49.741401   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:49.741421   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:49.741505   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:49.768427   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:49.768451   92925 cri.go:89] found id: ""
	I1213 19:11:49.768459   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:11:49.768517   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:49.772356   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:49.772478   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:49.801564   92925 cri.go:89] found id: ""
	I1213 19:11:49.801633   92925 logs.go:282] 0 containers: []
	W1213 19:11:49.801659   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:49.801687   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:49.801725   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:49.827233   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:49.827261   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:49.884809   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:49.884846   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:49.911980   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:11:49.912011   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:49.938143   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:49.938174   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:49.951851   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:49.951880   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:49.992816   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:49.992861   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:50.064112   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:50.064149   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:50.149808   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:50.149847   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:50.182876   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:50.182907   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:50.285831   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:50.285868   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:50.357682   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:50.350098    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:50.350586    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:50.351793    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:50.352420    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:50.354169    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:50.350098    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:50.350586    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:50.351793    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:50.352420    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:50.354169    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:52.858319   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:52.869473   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:52.869548   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:52.897144   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:52.897169   92925 cri.go:89] found id: ""
	I1213 19:11:52.897177   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:52.897234   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:52.900973   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:52.901074   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:52.928815   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:52.928842   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:52.928847   92925 cri.go:89] found id: ""
	I1213 19:11:52.928855   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:52.928912   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:52.932785   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:52.936853   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:52.936928   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:52.963913   92925 cri.go:89] found id: ""
	I1213 19:11:52.963940   92925 logs.go:282] 0 containers: []
	W1213 19:11:52.963949   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:52.963954   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:52.964018   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:52.993621   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:52.993685   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:52.993705   92925 cri.go:89] found id: ""
	I1213 19:11:52.993730   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:52.993820   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:52.997612   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:53.001214   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:53.001293   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:53.032707   92925 cri.go:89] found id: ""
	I1213 19:11:53.032733   92925 logs.go:282] 0 containers: []
	W1213 19:11:53.032742   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:53.032749   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:53.032812   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:53.059757   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:53.059780   92925 cri.go:89] found id: ""
	I1213 19:11:53.059805   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:11:53.059860   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:53.063600   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:53.063673   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:53.091179   92925 cri.go:89] found id: ""
	I1213 19:11:53.091248   92925 logs.go:282] 0 containers: []
	W1213 19:11:53.091286   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:53.091303   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:11:53.091316   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:53.123301   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:53.123391   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:53.196598   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:53.196634   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:53.227689   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:53.227715   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:53.327870   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:53.327905   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:53.343261   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:53.343290   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:53.371058   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:53.371089   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:53.418862   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:53.418896   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:53.475787   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:53.475822   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:53.507061   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:53.507090   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:53.584040   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:53.575651    4763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:53.576367    4763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:53.577874    4763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:53.578518    4763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:53.580190    4763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:53.575651    4763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:53.576367    4763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:53.577874    4763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:53.578518    4763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:53.580190    4763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:53.584063   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:53.584076   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:56.124239   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:56.136746   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:56.136818   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:56.165417   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:56.165442   92925 cri.go:89] found id: ""
	I1213 19:11:56.165451   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:56.165513   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:56.169272   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:56.169348   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:56.198281   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:56.198304   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:56.198309   92925 cri.go:89] found id: ""
	I1213 19:11:56.198316   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:56.198370   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:56.202310   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:56.206597   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:56.206670   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:56.233152   92925 cri.go:89] found id: ""
	I1213 19:11:56.233179   92925 logs.go:282] 0 containers: []
	W1213 19:11:56.233189   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:56.233195   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:56.233259   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:56.263980   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:56.264000   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:56.264005   92925 cri.go:89] found id: ""
	I1213 19:11:56.264013   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:56.264071   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:56.268409   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:56.272169   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:56.272245   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:56.307136   92925 cri.go:89] found id: ""
	I1213 19:11:56.307163   92925 logs.go:282] 0 containers: []
	W1213 19:11:56.307173   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:56.307179   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:56.307237   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:56.335595   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:56.335618   92925 cri.go:89] found id: ""
	I1213 19:11:56.335626   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:11:56.335684   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:56.339317   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:56.339388   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:56.365740   92925 cri.go:89] found id: ""
	I1213 19:11:56.365763   92925 logs.go:282] 0 containers: []
	W1213 19:11:56.365773   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:56.365782   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:56.365795   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:56.392684   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:56.392715   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:56.443884   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:56.443916   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:56.470931   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:11:56.471007   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:56.498493   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:56.498569   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:56.594275   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:56.594325   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:56.697865   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:56.697902   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:56.710803   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:56.710833   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:56.774588   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:56.766250    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:56.767127    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:56.768759    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:56.769116    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:56.770766    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:56.766250    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:56.767127    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:56.768759    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:56.769116    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:56.770766    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:56.774608   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:56.774621   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:56.822318   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:56.822354   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:56.879404   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:56.879440   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:59.418085   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:59.429523   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:59.429599   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:59.459140   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:59.459164   92925 cri.go:89] found id: ""
	I1213 19:11:59.459173   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:59.459250   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:59.463131   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:59.463231   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:59.491515   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:59.491539   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:59.491544   92925 cri.go:89] found id: ""
	I1213 19:11:59.491552   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:59.491650   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:59.495555   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:59.499043   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:59.499118   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:59.542670   92925 cri.go:89] found id: ""
	I1213 19:11:59.542745   92925 logs.go:282] 0 containers: []
	W1213 19:11:59.542771   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:59.542785   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:59.542861   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:59.569926   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:59.569950   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:59.569954   92925 cri.go:89] found id: ""
	I1213 19:11:59.569962   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:59.570030   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:59.574242   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:59.578071   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:59.578177   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:59.610686   92925 cri.go:89] found id: ""
	I1213 19:11:59.610714   92925 logs.go:282] 0 containers: []
	W1213 19:11:59.610723   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:59.610729   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:59.610789   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:59.639587   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:59.639641   92925 cri.go:89] found id: ""
	I1213 19:11:59.639659   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:11:59.639720   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:59.644316   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:59.644404   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:59.672619   92925 cri.go:89] found id: ""
	I1213 19:11:59.672644   92925 logs.go:282] 0 containers: []
	W1213 19:11:59.672653   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:59.672663   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:11:59.672684   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:59.700144   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:59.700172   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:59.777808   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:59.777856   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:59.811078   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:59.811111   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:59.910789   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:59.910827   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:59.987053   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:59.975650    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:59.976469    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:59.977682    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:59.978310    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:59.979849    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:59.975650    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:59.976469    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:59.977682    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:59.978310    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:59.979849    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:00.003642   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:00.003687   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:00.194711   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:00.194803   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:00.357297   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:00.357336   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:00.438487   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:00.438580   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:00.454845   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:00.454880   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:00.564592   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:00.564633   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:03.112543   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:03.123663   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:03.123738   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:03.157514   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:03.157538   92925 cri.go:89] found id: ""
	I1213 19:12:03.157546   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:03.157601   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:03.161756   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:03.161829   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:03.187867   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:03.187887   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:03.187892   92925 cri.go:89] found id: ""
	I1213 19:12:03.187900   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:03.187954   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:03.191586   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:03.195089   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:03.195186   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:03.227702   92925 cri.go:89] found id: ""
	I1213 19:12:03.227727   92925 logs.go:282] 0 containers: []
	W1213 19:12:03.227736   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:03.227742   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:03.227802   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:03.254539   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:03.254561   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:03.254566   92925 cri.go:89] found id: ""
	I1213 19:12:03.254574   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:03.254653   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:03.258434   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:03.262232   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:03.262309   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:03.293528   92925 cri.go:89] found id: ""
	I1213 19:12:03.293552   92925 logs.go:282] 0 containers: []
	W1213 19:12:03.293561   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:03.293567   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:03.293627   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:03.324573   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:03.324595   92925 cri.go:89] found id: ""
	I1213 19:12:03.324603   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:03.324655   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:03.328400   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:03.328469   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:03.354317   92925 cri.go:89] found id: ""
	I1213 19:12:03.354342   92925 logs.go:282] 0 containers: []
	W1213 19:12:03.354351   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:03.354362   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:03.354376   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:03.416520   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:03.416559   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:03.443937   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:03.443966   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:03.520631   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:03.520669   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:03.539545   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:03.539575   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:03.609658   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:03.599495    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:03.600262    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:03.602170    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:03.604093    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:03.604836    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:03.599495    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:03.600262    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:03.602170    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:03.604093    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:03.604836    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:03.609679   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:03.609691   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:03.641994   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:03.642021   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:03.683262   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:03.683296   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:03.711455   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:03.711486   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:03.742963   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:03.742994   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:03.842936   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:03.842971   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:06.387950   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:06.398757   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:06.398838   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:06.427281   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:06.427343   92925 cri.go:89] found id: ""
	I1213 19:12:06.427359   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:06.427424   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:06.431296   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:06.431370   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:06.458047   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:06.458069   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:06.458073   92925 cri.go:89] found id: ""
	I1213 19:12:06.458081   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:06.458138   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:06.461822   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:06.466010   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:06.466084   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:06.504515   92925 cri.go:89] found id: ""
	I1213 19:12:06.504542   92925 logs.go:282] 0 containers: []
	W1213 19:12:06.504551   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:06.504560   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:06.504621   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:06.541478   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:06.541501   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:06.541506   92925 cri.go:89] found id: ""
	I1213 19:12:06.541514   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:06.541576   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:06.545645   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:06.549634   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:06.549704   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:06.576630   92925 cri.go:89] found id: ""
	I1213 19:12:06.576698   92925 logs.go:282] 0 containers: []
	W1213 19:12:06.576724   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:06.576744   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:06.576832   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:06.604207   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:06.604229   92925 cri.go:89] found id: ""
	I1213 19:12:06.604237   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:06.604298   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:06.608117   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:06.608232   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:06.634291   92925 cri.go:89] found id: ""
	I1213 19:12:06.634362   92925 logs.go:282] 0 containers: []
	W1213 19:12:06.634379   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:06.634388   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:06.634402   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:06.696997   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:06.697085   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:06.756705   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:06.756741   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:06.836493   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:06.836525   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:06.936663   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:06.936700   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:06.949180   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:06.949212   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:07.020703   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:07.012352    5275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:07.013247    5275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:07.014825    5275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:07.015260    5275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:07.016747    5275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:07.012352    5275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:07.013247    5275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:07.014825    5275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:07.015260    5275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:07.016747    5275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:07.020728   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:07.020741   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:07.052354   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:07.052383   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:07.079834   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:07.079865   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:07.119690   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:07.119720   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:07.146357   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:07.146385   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:09.686883   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:09.697849   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:09.697924   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:09.724282   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:09.724307   92925 cri.go:89] found id: ""
	I1213 19:12:09.724316   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:09.724374   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:09.727853   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:09.727929   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:09.757294   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:09.757315   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:09.757320   92925 cri.go:89] found id: ""
	I1213 19:12:09.757328   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:09.757383   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:09.761291   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:09.764680   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:09.764755   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:09.791939   92925 cri.go:89] found id: ""
	I1213 19:12:09.791964   92925 logs.go:282] 0 containers: []
	W1213 19:12:09.791974   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:09.791979   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:09.792059   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:09.819349   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:09.819415   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:09.819435   92925 cri.go:89] found id: ""
	I1213 19:12:09.819460   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:09.819540   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:09.823580   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:09.827023   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:09.827138   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:09.857888   92925 cri.go:89] found id: ""
	I1213 19:12:09.857966   92925 logs.go:282] 0 containers: []
	W1213 19:12:09.857990   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:09.858001   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:09.858066   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:09.884350   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:09.884373   92925 cri.go:89] found id: ""
	I1213 19:12:09.884381   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:09.884438   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:09.888641   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:09.888720   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:09.915592   92925 cri.go:89] found id: ""
	I1213 19:12:09.915614   92925 logs.go:282] 0 containers: []
	W1213 19:12:09.915623   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:09.915632   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:09.915644   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:09.941582   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:09.941614   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:10.002342   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:10.002377   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:10.031301   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:10.031336   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:10.071296   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:10.071332   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:10.123567   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:10.123605   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:10.157428   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:10.157457   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:10.238347   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:10.238426   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:10.334563   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:10.334598   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:10.347255   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:10.347286   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:10.432160   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:10.423156    5451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:10.423973    5451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:10.425617    5451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:10.426254    5451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:10.428070    5451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:10.423156    5451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:10.423973    5451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:10.425617    5451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:10.426254    5451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:10.428070    5451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:10.432226   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:10.432252   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:12.994728   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:13.005943   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:13.006017   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:13.033581   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:13.033602   92925 cri.go:89] found id: ""
	I1213 19:12:13.033610   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:13.033689   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:13.037439   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:13.037531   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:13.069482   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:13.069506   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:13.069511   92925 cri.go:89] found id: ""
	I1213 19:12:13.069520   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:13.069579   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:13.073384   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:13.077179   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:13.077250   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:13.117434   92925 cri.go:89] found id: ""
	I1213 19:12:13.117508   92925 logs.go:282] 0 containers: []
	W1213 19:12:13.117525   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:13.117532   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:13.117603   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:13.151113   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:13.151191   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:13.151211   92925 cri.go:89] found id: ""
	I1213 19:12:13.151235   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:13.151330   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:13.155305   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:13.159267   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:13.159375   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:13.193156   92925 cri.go:89] found id: ""
	I1213 19:12:13.193183   92925 logs.go:282] 0 containers: []
	W1213 19:12:13.193191   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:13.193197   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:13.193303   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:13.228192   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:13.228272   92925 cri.go:89] found id: ""
	I1213 19:12:13.228304   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:13.228385   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:13.232149   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:13.232270   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:13.265793   92925 cri.go:89] found id: ""
	I1213 19:12:13.265868   92925 logs.go:282] 0 containers: []
	W1213 19:12:13.265892   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:13.265914   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:13.265974   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:13.298247   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:13.298332   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:13.338944   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:13.338977   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:13.398561   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:13.398600   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:13.426862   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:13.426891   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:13.526771   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:13.526807   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:13.539556   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:13.539587   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:13.606738   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:13.598805    5571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:13.599569    5571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:13.600660    5571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:13.601348    5571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:13.602977    5571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:13.598805    5571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:13.599569    5571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:13.600660    5571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:13.601348    5571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:13.602977    5571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:13.606761   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:13.606777   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:13.632299   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:13.632367   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:13.681186   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:13.681224   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:13.715711   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:13.715741   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:16.289974   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:16.301720   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:16.301794   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:16.333180   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:16.333203   92925 cri.go:89] found id: ""
	I1213 19:12:16.333211   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:16.333262   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:16.337163   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:16.337233   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:16.366808   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:16.366829   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:16.366834   92925 cri.go:89] found id: ""
	I1213 19:12:16.366841   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:16.366897   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:16.370643   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:16.374381   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:16.374453   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:16.402639   92925 cri.go:89] found id: ""
	I1213 19:12:16.402663   92925 logs.go:282] 0 containers: []
	W1213 19:12:16.402672   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:16.402678   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:16.402735   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:16.429862   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:16.429927   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:16.429948   92925 cri.go:89] found id: ""
	I1213 19:12:16.429971   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:16.430057   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:16.437586   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:16.443620   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:16.443739   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:16.468889   92925 cri.go:89] found id: ""
	I1213 19:12:16.468915   92925 logs.go:282] 0 containers: []
	W1213 19:12:16.468933   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:16.468940   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:16.469002   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:16.497884   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:16.497952   92925 cri.go:89] found id: ""
	I1213 19:12:16.497975   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:16.498065   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:16.501907   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:16.502017   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:16.528833   92925 cri.go:89] found id: ""
	I1213 19:12:16.528861   92925 logs.go:282] 0 containers: []
	W1213 19:12:16.528871   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:16.528880   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:16.528891   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:16.571970   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:16.572003   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:16.599399   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:16.599433   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:16.626668   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:16.626698   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:16.657476   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:16.657505   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:16.756171   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:16.756207   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:16.768558   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:16.768587   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:16.841002   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:16.841041   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:16.913877   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:16.913951   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:17.002296   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:16.981549    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:16.983800    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:16.984559    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:16.987461    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:16.988234    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:16.981549    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:16.983800    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:16.984559    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:16.987461    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:16.988234    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:17.002364   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:17.002385   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:17.029940   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:17.029968   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
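
The repeated "connection refused" on localhost:8443 above means an apiserver container exists but is not yet serving, so every "describe nodes" probe fails while minikube keeps waiting. A minimal sketch of reproducing the same probe by hand from inside the node (for example over "minikube ssh"); every command and path below is taken verbatim from the log lines above, so binary versions and kubeconfig locations may differ on other clusters:

    # is a kube-apiserver process running at all? (pattern quoted here for safety)
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # which apiserver containers does the CRI runtime know about?
    sudo crictl ps -a --quiet --name=kube-apiserver
    # this is the call that fails with "connection refused" until port 8443 answers
    sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
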
	I1213 19:12:19.576739   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:19.587975   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:19.588041   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:19.614817   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:19.614840   92925 cri.go:89] found id: ""
	I1213 19:12:19.614848   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:19.614903   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:19.618582   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:19.618679   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:19.651398   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:19.651419   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:19.651424   92925 cri.go:89] found id: ""
	I1213 19:12:19.651432   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:19.651501   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:19.655392   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:19.659059   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:19.659134   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:19.684221   92925 cri.go:89] found id: ""
	I1213 19:12:19.684247   92925 logs.go:282] 0 containers: []
	W1213 19:12:19.684257   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:19.684264   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:19.684323   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:19.711198   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:19.711220   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:19.711226   92925 cri.go:89] found id: ""
	I1213 19:12:19.711233   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:19.711289   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:19.715680   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:19.719221   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:19.719292   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:19.751237   92925 cri.go:89] found id: ""
	I1213 19:12:19.751286   92925 logs.go:282] 0 containers: []
	W1213 19:12:19.751296   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:19.751303   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:19.751371   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:19.778300   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:19.778321   92925 cri.go:89] found id: ""
	I1213 19:12:19.778330   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:19.778413   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:19.782520   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:19.782614   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:19.814477   92925 cri.go:89] found id: ""
	I1213 19:12:19.814507   92925 logs.go:282] 0 containers: []
	W1213 19:12:19.814517   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:19.814526   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:19.814558   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:19.855891   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:19.855922   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:19.917648   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:19.917687   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:19.949548   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:19.949574   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:19.976644   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:19.976680   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:20.064988   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:20.065042   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:20.114742   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:20.114776   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:20.220028   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:20.220066   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:20.232673   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:20.232703   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:20.314099   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:20.305597    5860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:20.306343    5860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:20.308133    5860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:20.308739    5860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:20.310382    5860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:20.305597    5860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:20.306343    5860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:20.308133    5860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:20.308739    5860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:20.310382    5860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:20.314125   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:20.314142   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:20.358618   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:20.358649   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:22.884692   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:22.896642   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:22.896714   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:22.925894   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:22.925919   92925 cri.go:89] found id: ""
	I1213 19:12:22.925928   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:22.925982   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:22.929556   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:22.929630   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:22.957310   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:22.957375   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:22.957393   92925 cri.go:89] found id: ""
	I1213 19:12:22.957419   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:22.957496   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:22.961230   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:22.964927   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:22.965122   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:22.993901   92925 cri.go:89] found id: ""
	I1213 19:12:22.993974   92925 logs.go:282] 0 containers: []
	W1213 19:12:22.994000   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:22.994012   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:22.994092   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:23.021087   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:23.021112   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:23.021117   92925 cri.go:89] found id: ""
	I1213 19:12:23.021123   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:23.021179   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:23.025414   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:23.029044   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:23.029147   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:23.054815   92925 cri.go:89] found id: ""
	I1213 19:12:23.054840   92925 logs.go:282] 0 containers: []
	W1213 19:12:23.054848   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:23.054855   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:23.054913   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:23.080286   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:23.080312   92925 cri.go:89] found id: ""
	I1213 19:12:23.080320   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:23.080407   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:23.084274   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:23.084375   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:23.115727   92925 cri.go:89] found id: ""
	I1213 19:12:23.115750   92925 logs.go:282] 0 containers: []
	W1213 19:12:23.115758   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:23.115767   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:23.115796   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:23.194830   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:23.186405    5944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:23.187281    5944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:23.188756    5944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:23.189379    5944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:23.191250    5944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:23.186405    5944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:23.187281    5944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:23.188756    5944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:23.189379    5944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:23.191250    5944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:23.194890   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:23.194911   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:23.234766   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:23.234801   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:23.282930   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:23.282966   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:23.352028   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:23.352067   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:23.379340   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:23.379418   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:23.425558   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:23.425589   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:23.453170   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:23.453198   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:23.484993   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:23.485089   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:23.575060   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:23.575093   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:23.676623   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:23.676658   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
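
The rounds above repeat roughly every three seconds (19:12:16, 19:12:19, 19:12:23, ...) while minikube waits for the apiserver to become healthy; each round re-lists the CRI containers and re-gathers the same log sources. A sketch of collecting those sources manually, with every command taken verbatim from the log; the container ID is a placeholder for one of the IDs printed above:

    # tail a control-plane container's logs (substitute an etcd/kube-scheduler/kube-apiserver ID from above)
    sudo /usr/local/bin/crictl logs --tail 400 <container-id>
    # unit logs for the kubelet and the CRI-O runtime
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    # kernel warnings and errors only
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
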
	I1213 19:12:26.191200   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:26.202087   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:26.202208   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:26.237575   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:26.237607   92925 cri.go:89] found id: ""
	I1213 19:12:26.237616   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:26.237685   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:26.242604   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:26.242726   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:26.275657   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:26.275680   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:26.275687   92925 cri.go:89] found id: ""
	I1213 19:12:26.275696   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:26.275774   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:26.279747   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:26.283677   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:26.283784   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:26.312109   92925 cri.go:89] found id: ""
	I1213 19:12:26.312185   92925 logs.go:282] 0 containers: []
	W1213 19:12:26.312219   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:26.312239   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:26.312329   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:26.342409   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:26.342432   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:26.342437   92925 cri.go:89] found id: ""
	I1213 19:12:26.342445   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:26.342500   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:26.346485   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:26.350281   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:26.350365   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:26.375751   92925 cri.go:89] found id: ""
	I1213 19:12:26.375775   92925 logs.go:282] 0 containers: []
	W1213 19:12:26.375783   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:26.375790   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:26.375864   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:26.401584   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:26.401607   92925 cri.go:89] found id: ""
	I1213 19:12:26.401614   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:26.401686   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:26.405294   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:26.405373   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:26.433390   92925 cri.go:89] found id: ""
	I1213 19:12:26.433467   92925 logs.go:282] 0 containers: []
	W1213 19:12:26.433491   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:26.433507   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:26.433533   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:26.493265   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:26.493305   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:26.528279   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:26.528307   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:26.612530   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:26.612565   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:26.625201   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:26.625231   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:26.695921   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:26.686948    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:26.687827    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:26.689491    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:26.690111    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:26.691852    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:26.686948    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:26.687827    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:26.689491    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:26.690111    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:26.691852    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:26.695942   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:26.695955   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:26.721367   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:26.721436   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:26.747790   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:26.747818   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:26.778783   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:26.778813   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:26.875307   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:26.875341   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:26.926065   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:26.926104   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:29.471412   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:29.482208   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:29.482279   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:29.518089   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:29.518111   92925 cri.go:89] found id: ""
	I1213 19:12:29.518120   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:29.518179   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:29.522151   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:29.522316   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:29.550522   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:29.550548   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:29.550553   92925 cri.go:89] found id: ""
	I1213 19:12:29.550561   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:29.550614   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:29.554476   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:29.557855   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:29.557927   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:29.585314   92925 cri.go:89] found id: ""
	I1213 19:12:29.585337   92925 logs.go:282] 0 containers: []
	W1213 19:12:29.585346   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:29.585352   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:29.585415   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:29.613061   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:29.613081   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:29.613087   92925 cri.go:89] found id: ""
	I1213 19:12:29.613094   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:29.613149   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:29.617383   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:29.621127   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:29.621198   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:29.648388   92925 cri.go:89] found id: ""
	I1213 19:12:29.648415   92925 logs.go:282] 0 containers: []
	W1213 19:12:29.648425   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:29.648434   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:29.648493   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:29.675800   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:29.675823   92925 cri.go:89] found id: ""
	I1213 19:12:29.675832   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:29.675885   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:29.679891   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:29.679964   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:29.708415   92925 cri.go:89] found id: ""
	I1213 19:12:29.708439   92925 logs.go:282] 0 containers: []
	W1213 19:12:29.708447   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:29.708457   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:29.708469   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:29.747281   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:29.747357   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:29.791340   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:29.791374   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:29.834406   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:29.834436   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:29.861132   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:29.861162   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:29.962754   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:29.962831   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:29.975698   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:29.975725   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:30.136167   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:30.136206   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:30.219391   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:30.219426   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:30.250060   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:30.250090   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:30.324085   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:30.315913    6276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:30.316779    6276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:30.318083    6276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:30.318787    6276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:30.320486    6276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:30.315913    6276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:30.316779    6276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:30.318083    6276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:30.318787    6276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:30.320486    6276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:30.324108   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:30.324122   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:32.849129   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:32.861076   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:32.861146   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:32.890816   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:32.890837   92925 cri.go:89] found id: ""
	I1213 19:12:32.890845   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:32.890899   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:32.894607   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:32.894684   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:32.925830   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:32.925856   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:32.925861   92925 cri.go:89] found id: ""
	I1213 19:12:32.925868   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:32.925921   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:32.929582   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:32.932913   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:32.932983   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:32.959171   92925 cri.go:89] found id: ""
	I1213 19:12:32.959199   92925 logs.go:282] 0 containers: []
	W1213 19:12:32.959208   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:32.959214   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:32.959319   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:32.993282   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:32.993309   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:32.993315   92925 cri.go:89] found id: ""
	I1213 19:12:32.993331   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:32.993393   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:32.997923   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:33.002009   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:33.002111   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:33.029187   92925 cri.go:89] found id: ""
	I1213 19:12:33.029210   92925 logs.go:282] 0 containers: []
	W1213 19:12:33.029219   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:33.029225   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:33.029333   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:33.057252   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:33.057287   92925 cri.go:89] found id: ""
	I1213 19:12:33.057296   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:33.057360   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:33.061234   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:33.061340   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:33.089861   92925 cri.go:89] found id: ""
	I1213 19:12:33.089889   92925 logs.go:282] 0 containers: []
	W1213 19:12:33.089898   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:33.089907   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:33.089919   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:33.108679   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:33.108710   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:33.162722   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:33.162768   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:33.227823   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:33.227861   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:33.260183   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:33.260210   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:33.286847   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:33.286872   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:33.368228   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:33.368263   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:33.475747   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:33.475786   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:33.554192   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:33.546124    6395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:33.546992    6395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:33.548557    6395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:33.549128    6395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:33.550628    6395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:33.546124    6395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:33.546992    6395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:33.548557    6395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:33.549128    6395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:33.550628    6395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:33.554212   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:33.554225   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:33.579823   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:33.579850   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:33.623777   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:33.623815   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
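
Each round discovers containers with "crictl ps -a --quiet --name=<component>". In this log only kube-apiserver, etcd, kube-scheduler and kube-controller-manager are found; coredns and kube-proxy report 0 containers, most likely because those run as API-managed pods that cannot be (re)started while the apiserver is unreachable, and no kindnet CNI container is present on this node. The same discovery by hand, commands verbatim from the log:

    # static control-plane components: found on this node
    sudo crictl ps -a --quiet --name=kube-apiserver
    sudo crictl ps -a --quiet --name=etcd
    sudo crictl ps -a --quiet --name=kube-scheduler
    sudo crictl ps -a --quiet --name=kube-controller-manager
    # these return nothing in the log above
    sudo crictl ps -a --quiet --name=coredns
    sudo crictl ps -a --quiet --name=kube-proxy
    sudo crictl ps -a --quiet --name=kindnet
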
	I1213 19:12:36.157314   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:36.168502   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:36.168576   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:36.196421   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:36.196442   92925 cri.go:89] found id: ""
	I1213 19:12:36.196451   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:36.196511   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:36.200568   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:36.200636   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:36.227300   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:36.227324   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:36.227331   92925 cri.go:89] found id: ""
	I1213 19:12:36.227338   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:36.227396   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:36.231459   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:36.235239   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:36.235316   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:36.268611   92925 cri.go:89] found id: ""
	I1213 19:12:36.268635   92925 logs.go:282] 0 containers: []
	W1213 19:12:36.268644   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:36.268650   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:36.268731   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:36.308479   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:36.308576   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:36.308597   92925 cri.go:89] found id: ""
	I1213 19:12:36.308642   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:36.308738   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:36.312547   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:36.316077   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:36.316189   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:36.342346   92925 cri.go:89] found id: ""
	I1213 19:12:36.342382   92925 logs.go:282] 0 containers: []
	W1213 19:12:36.342392   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:36.342414   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:36.342496   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:36.368808   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:36.368834   92925 cri.go:89] found id: ""
	I1213 19:12:36.368844   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:36.368899   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:36.372705   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:36.372790   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:36.399760   92925 cri.go:89] found id: ""
	I1213 19:12:36.399796   92925 logs.go:282] 0 containers: []
	W1213 19:12:36.399805   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:36.399817   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:36.399829   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:36.497016   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:36.497097   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:36.511432   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:36.511552   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:36.587222   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:36.577960    6497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:36.578711    6497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:36.580805    6497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:36.581572    6497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:36.583427    6497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:36.577960    6497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:36.578711    6497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:36.580805    6497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:36.581572    6497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:36.583427    6497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
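Each retry cycle in this loop repeats the same probe: minikube shells into the node and runs the bundled kubectl against the apiserver on localhost:8443, which keeps refusing connections, so only the container and journal logs get gathered. As a rough manual reproduction (a sketch only; the kubectl path, kubeconfig location, and crictl invocation are taken verbatim from the log above, and running them via "minikube ssh" on the same profile is an assumption):
	# inside the node (e.g. via "minikube ssh"):
	sudo crictl ps -a --quiet --name=kube-apiserver
	sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
While the apiserver is down, the second command fails with the same "connection refused" stderr shown in the block above.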
	I1213 19:12:36.587247   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:36.587262   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:36.630739   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:36.630774   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:36.683440   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:36.683473   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:36.751190   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:36.751241   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:36.779744   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:36.779833   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:36.806180   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:36.806206   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:36.832449   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:36.832475   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:36.910859   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:36.910900   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:39.441151   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:39.452365   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:39.452439   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:39.484411   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:39.484436   92925 cri.go:89] found id: ""
	I1213 19:12:39.484444   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:39.484499   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:39.488316   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:39.488390   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:39.519236   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:39.519263   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:39.519268   92925 cri.go:89] found id: ""
	I1213 19:12:39.519277   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:39.519331   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:39.523340   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:39.529308   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:39.529377   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:39.559339   92925 cri.go:89] found id: ""
	I1213 19:12:39.559405   92925 logs.go:282] 0 containers: []
	W1213 19:12:39.559437   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:39.559456   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:39.559543   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:39.589737   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:39.589769   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:39.589775   92925 cri.go:89] found id: ""
	I1213 19:12:39.589783   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:39.589848   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:39.593976   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:39.598330   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:39.598421   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:39.631670   92925 cri.go:89] found id: ""
	I1213 19:12:39.631699   92925 logs.go:282] 0 containers: []
	W1213 19:12:39.631708   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:39.631714   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:39.631783   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:39.662738   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:39.662803   92925 cri.go:89] found id: ""
	I1213 19:12:39.662824   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:39.662906   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:39.666773   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:39.666867   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:39.695600   92925 cri.go:89] found id: ""
	I1213 19:12:39.695627   92925 logs.go:282] 0 containers: []
	W1213 19:12:39.695637   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:39.695646   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:39.695658   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:39.787866   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:39.787904   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:39.864556   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:39.853140    6628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:39.856488    6628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:39.857226    6628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:39.858708    6628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:39.859314    6628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:39.853140    6628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:39.856488    6628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:39.857226    6628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:39.858708    6628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:39.859314    6628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:39.864580   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:39.864594   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:39.893552   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:39.893593   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:39.935040   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:39.935070   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:39.977962   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:39.977992   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:40.052674   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:40.052713   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:40.145597   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:40.145709   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:40.181340   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:40.181368   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:40.194929   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:40.194999   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:40.222595   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:40.222665   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:42.749068   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:42.760019   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:42.760098   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:42.790868   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:42.790891   92925 cri.go:89] found id: ""
	I1213 19:12:42.790898   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:42.790953   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:42.794682   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:42.794770   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:42.823001   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:42.823024   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:42.823029   92925 cri.go:89] found id: ""
	I1213 19:12:42.823036   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:42.823102   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:42.826966   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:42.830581   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:42.830667   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:42.857298   92925 cri.go:89] found id: ""
	I1213 19:12:42.857325   92925 logs.go:282] 0 containers: []
	W1213 19:12:42.857334   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:42.857340   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:42.857402   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:42.888499   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:42.888524   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:42.888528   92925 cri.go:89] found id: ""
	I1213 19:12:42.888535   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:42.888601   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:42.894724   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:42.898823   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:42.898944   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:42.925225   92925 cri.go:89] found id: ""
	I1213 19:12:42.925262   92925 logs.go:282] 0 containers: []
	W1213 19:12:42.925271   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:42.925277   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:42.925363   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:42.954151   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:42.954186   92925 cri.go:89] found id: ""
	I1213 19:12:42.954195   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:42.954262   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:42.958191   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:42.958256   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:42.997632   92925 cri.go:89] found id: ""
	I1213 19:12:42.997699   92925 logs.go:282] 0 containers: []
	W1213 19:12:42.997722   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:42.997738   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:42.997750   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:43.044934   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:43.044968   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:43.130707   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:43.130787   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:43.162064   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:43.162196   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:43.174781   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:43.174807   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:43.248282   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:43.239057    6792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:43.239785    6792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:43.241456    6792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:43.242060    6792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:43.243778    6792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:43.239057    6792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:43.239785    6792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:43.241456    6792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:43.242060    6792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:43.243778    6792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:43.248309   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:43.248322   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:43.292697   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:43.292729   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:43.326878   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:43.326906   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:43.402321   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:43.402356   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:43.434630   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:43.434662   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:43.547901   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:43.547940   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:46.074896   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:46.086088   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:46.086156   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:46.138954   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:46.138977   92925 cri.go:89] found id: ""
	I1213 19:12:46.138985   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:46.139041   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:46.142934   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:46.143008   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:46.167983   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:46.168008   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:46.168014   92925 cri.go:89] found id: ""
	I1213 19:12:46.168022   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:46.168083   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:46.172203   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:46.176085   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:46.176164   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:46.206474   92925 cri.go:89] found id: ""
	I1213 19:12:46.206501   92925 logs.go:282] 0 containers: []
	W1213 19:12:46.206509   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:46.206515   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:46.206572   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:46.232990   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:46.233047   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:46.233052   92925 cri.go:89] found id: ""
	I1213 19:12:46.233059   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:46.233121   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:46.236960   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:46.241098   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:46.241171   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:46.277846   92925 cri.go:89] found id: ""
	I1213 19:12:46.277872   92925 logs.go:282] 0 containers: []
	W1213 19:12:46.277881   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:46.277886   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:46.277945   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:46.306293   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:46.306316   92925 cri.go:89] found id: ""
	I1213 19:12:46.306324   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:46.306383   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:46.310146   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:46.310220   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:46.337703   92925 cri.go:89] found id: ""
	I1213 19:12:46.337728   92925 logs.go:282] 0 containers: []
	W1213 19:12:46.337737   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:46.337746   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:46.337757   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:46.433354   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:46.433391   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:46.446062   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:46.446089   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:46.474866   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:46.474894   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:46.518894   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:46.518972   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:46.584190   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:46.584221   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:46.612728   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:46.612798   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:46.693365   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:46.693401   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:46.730005   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:46.730036   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:46.805821   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:46.797250    6951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:46.797857    6951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:46.799401    6951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:46.799906    6951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:46.801867    6951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:46.797250    6951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:46.797857    6951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:46.799401    6951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:46.799906    6951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:46.801867    6951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:46.805844   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:46.805858   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:46.849142   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:46.849180   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:49.377325   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:49.388007   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:49.388073   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:49.414745   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:49.414768   92925 cri.go:89] found id: ""
	I1213 19:12:49.414777   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:49.414831   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:49.418502   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:49.418579   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:49.443751   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:49.443772   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:49.443777   92925 cri.go:89] found id: ""
	I1213 19:12:49.443784   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:49.443864   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:49.447524   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:49.450957   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:49.451025   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:49.478284   92925 cri.go:89] found id: ""
	I1213 19:12:49.478309   92925 logs.go:282] 0 containers: []
	W1213 19:12:49.478318   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:49.478324   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:49.478383   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:49.506581   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:49.506604   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:49.506609   92925 cri.go:89] found id: ""
	I1213 19:12:49.506617   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:49.506673   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:49.513976   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:49.518489   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:49.518567   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:49.545961   92925 cri.go:89] found id: ""
	I1213 19:12:49.545986   92925 logs.go:282] 0 containers: []
	W1213 19:12:49.545995   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:49.546001   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:49.546072   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:49.579946   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:49.579974   92925 cri.go:89] found id: ""
	I1213 19:12:49.579983   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:49.580036   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:49.583648   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:49.583726   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:49.610201   92925 cri.go:89] found id: ""
	I1213 19:12:49.610278   92925 logs.go:282] 0 containers: []
	W1213 19:12:49.610294   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:49.610304   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:49.610321   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:49.682958   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:49.682995   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:49.716028   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:49.716058   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:49.744220   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:49.744248   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:49.783347   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:49.783379   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:49.826736   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:49.826770   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:49.860737   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:49.860767   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:49.894176   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:49.894206   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:49.978486   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:49.978525   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:50.088530   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:50.088567   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:50.107858   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:50.107886   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:50.186950   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:50.178748    7100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:50.179306    7100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:50.180827    7100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:50.181343    7100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:50.182902    7100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:50.178748    7100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:50.179306    7100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:50.180827    7100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:50.181343    7100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:50.182902    7100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
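The gathering passes above have already pinned down the apiserver container ID, so its own output can be inspected directly with the same crictl and journalctl invocations the loop uses (a sketch; the container ID, crictl path, and journalctl unit are copied verbatim from the log above, and the grep filter is only an added convenience):
	# inside the node (e.g. via "minikube ssh"):
	sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e
	sudo journalctl -u kubelet -n 400 | grep -i apiserver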
	I1213 19:12:52.687879   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:52.700111   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:52.700185   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:52.727611   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:52.727635   92925 cri.go:89] found id: ""
	I1213 19:12:52.727643   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:52.727699   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:52.732611   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:52.732683   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:52.760331   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:52.760355   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:52.760361   92925 cri.go:89] found id: ""
	I1213 19:12:52.760369   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:52.760424   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:52.764203   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:52.767807   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:52.767880   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:52.794453   92925 cri.go:89] found id: ""
	I1213 19:12:52.794528   92925 logs.go:282] 0 containers: []
	W1213 19:12:52.794552   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:52.794571   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:52.794662   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:52.824938   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:52.825046   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:52.825077   92925 cri.go:89] found id: ""
	I1213 19:12:52.825108   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:52.825170   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:52.828865   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:52.832644   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:52.832718   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:52.860489   92925 cri.go:89] found id: ""
	I1213 19:12:52.860512   92925 logs.go:282] 0 containers: []
	W1213 19:12:52.860521   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:52.860527   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:52.860588   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:52.886828   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:52.886862   92925 cri.go:89] found id: ""
	I1213 19:12:52.886872   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:52.886940   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:52.890986   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:52.891106   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:52.917681   92925 cri.go:89] found id: ""
	I1213 19:12:52.917749   92925 logs.go:282] 0 containers: []
	W1213 19:12:52.917776   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:52.917799   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:52.917837   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:52.948506   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:52.948535   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:52.977936   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:52.977963   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:53.041212   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:53.041249   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:53.080162   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:53.080189   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:53.174852   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:53.174897   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:53.273766   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:53.273802   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:53.285893   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:53.285925   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:53.352966   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:53.343677    7212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:53.345158    7212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:53.345928    7212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:53.347424    7212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:53.347925    7212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:53.343677    7212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:53.345158    7212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:53.345928    7212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:53.347424    7212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:53.347925    7212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:53.352990   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:53.353032   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:53.391432   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:53.391464   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:53.451329   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:53.451363   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:55.977809   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:55.993375   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:55.993492   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:56.026972   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:56.026993   92925 cri.go:89] found id: ""
	I1213 19:12:56.027001   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:56.027059   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:56.031128   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:56.031204   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:56.058936   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:56.058958   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:56.058963   92925 cri.go:89] found id: ""
	I1213 19:12:56.058971   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:56.059024   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:56.062862   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:56.066757   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:56.066858   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:56.096088   92925 cri.go:89] found id: ""
	I1213 19:12:56.096112   92925 logs.go:282] 0 containers: []
	W1213 19:12:56.096121   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:56.096134   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:56.096196   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:56.138653   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:56.138678   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:56.138683   92925 cri.go:89] found id: ""
	I1213 19:12:56.138691   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:56.138748   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:56.142767   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:56.146336   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:56.146413   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:56.176996   92925 cri.go:89] found id: ""
	I1213 19:12:56.177098   92925 logs.go:282] 0 containers: []
	W1213 19:12:56.177115   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:56.177122   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:56.177191   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:56.206318   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:56.206341   92925 cri.go:89] found id: ""
	I1213 19:12:56.206350   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:56.206405   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:56.210085   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:56.210208   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:56.240242   92925 cri.go:89] found id: ""
	I1213 19:12:56.240269   92925 logs.go:282] 0 containers: []
	W1213 19:12:56.240278   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:56.240287   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:56.240299   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:56.268772   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:56.268800   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:56.282265   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:56.282293   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:56.334697   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:56.334731   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:56.419986   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:56.420074   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:56.466391   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:56.466421   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:56.578289   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:56.578327   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:56.657266   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:56.648227    7343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:56.649364    7343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:56.650885    7343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:56.651401    7343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:56.653076    7343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:56.648227    7343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:56.649364    7343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:56.650885    7343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:56.651401    7343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:56.653076    7343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:56.657289   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:56.657302   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:56.685603   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:56.685631   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:56.732451   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:56.732487   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:56.807034   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:56.807068   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
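Each cycle above discovers component containers by running "sudo crictl ps -a --quiet --name=<component>", which prints one container ID per line; the "found id:" entries are the parsed result, and an empty result yields the "No container was found matching" warnings. A hedged Go sketch of that parse step (illustrative, not minikube's actual cri.go code):

	// Illustrative sketch: list container IDs for a named component via
	// crictl, one ID per line of output.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func listContainerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(string(out), "\n") {
			if id := strings.TrimSpace(line); id != "" {
				ids = append(ids, id)
			}
		}
		return ids, nil
	}

	func main() {
		ids, err := listContainerIDs("etcd")
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	}
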
	I1213 19:12:59.335877   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:59.346983   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:59.347053   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:59.375213   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:59.375241   92925 cri.go:89] found id: ""
	I1213 19:12:59.375250   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:59.375308   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:59.379246   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:59.379319   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:59.406052   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:59.406073   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:59.406078   92925 cri.go:89] found id: ""
	I1213 19:12:59.406085   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:59.406142   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:59.409969   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:59.413744   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:59.413813   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:59.440031   92925 cri.go:89] found id: ""
	I1213 19:12:59.440057   92925 logs.go:282] 0 containers: []
	W1213 19:12:59.440066   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:59.440072   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:59.440131   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:59.470750   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:59.470770   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:59.470775   92925 cri.go:89] found id: ""
	I1213 19:12:59.470782   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:59.470836   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:59.474671   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:59.478148   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:59.478230   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:59.532301   92925 cri.go:89] found id: ""
	I1213 19:12:59.532334   92925 logs.go:282] 0 containers: []
	W1213 19:12:59.532344   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:59.532350   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:59.532423   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:59.558719   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:59.558742   92925 cri.go:89] found id: ""
	I1213 19:12:59.558750   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:59.558814   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:59.562460   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:59.562534   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:59.588851   92925 cri.go:89] found id: ""
	I1213 19:12:59.588916   92925 logs.go:282] 0 containers: []
	W1213 19:12:59.588942   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:59.588964   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:59.589031   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:59.665993   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:59.666032   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:59.712805   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:59.712839   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:59.725635   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:59.725688   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:59.797796   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:59.790093    7462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:59.790845    7462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:59.791906    7462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:59.792472    7462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:59.794170    7462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:59.790093    7462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:59.790845    7462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:59.791906    7462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:59.792472    7462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:59.794170    7462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:59.797819   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:59.797831   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:59.825855   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:59.825886   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:59.864251   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:59.864286   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:59.890125   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:59.890151   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:59.981337   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:59.981387   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:00.239751   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:00.239799   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:00.366187   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:00.368005   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:02.909028   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:02.919617   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:02.919732   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:02.946548   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:02.946613   92925 cri.go:89] found id: ""
	I1213 19:13:02.946629   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:02.946696   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:02.950448   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:02.950542   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:02.975550   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:02.975572   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:02.975577   92925 cri.go:89] found id: ""
	I1213 19:13:02.975585   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:02.975645   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:02.979406   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:02.984704   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:02.984818   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:03.017288   92925 cri.go:89] found id: ""
	I1213 19:13:03.017311   92925 logs.go:282] 0 containers: []
	W1213 19:13:03.017320   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:03.017334   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:03.017393   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:03.048824   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:03.048850   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:03.048857   92925 cri.go:89] found id: ""
	I1213 19:13:03.048864   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:03.048919   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:03.052630   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:03.056397   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:03.056521   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:03.088050   92925 cri.go:89] found id: ""
	I1213 19:13:03.088123   92925 logs.go:282] 0 containers: []
	W1213 19:13:03.088146   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:03.088165   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:03.088271   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:03.119709   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:13:03.119778   92925 cri.go:89] found id: ""
	I1213 19:13:03.119801   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:13:03.119889   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:03.127122   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:03.127274   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:03.162913   92925 cri.go:89] found id: ""
	I1213 19:13:03.162936   92925 logs.go:282] 0 containers: []
	W1213 19:13:03.162945   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:03.162953   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:03.162966   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:03.207543   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:03.207579   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:03.279537   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:13:03.279575   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:13:03.314034   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:03.314062   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:03.394532   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:03.394567   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:03.428318   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:03.428351   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:03.528148   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:03.528187   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:03.626750   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:03.618493    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:03.619154    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:03.620764    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:03.621367    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:03.622889    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:03.618493    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:03.619154    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:03.620764    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:03.621367    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:03.622889    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:03.626775   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:03.626788   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:03.685480   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:03.685519   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:03.713856   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:03.713883   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:03.734590   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:03.734620   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:06.266879   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:06.277733   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:06.277799   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:06.305175   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:06.305196   92925 cri.go:89] found id: ""
	I1213 19:13:06.305204   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:06.305258   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:06.308850   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:06.308928   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:06.335153   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:06.335177   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:06.335182   92925 cri.go:89] found id: ""
	I1213 19:13:06.335189   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:06.335246   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:06.338903   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:06.342418   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:06.342493   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:06.372604   92925 cri.go:89] found id: ""
	I1213 19:13:06.372632   92925 logs.go:282] 0 containers: []
	W1213 19:13:06.372641   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:06.372646   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:06.372707   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:06.402642   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:06.402670   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:06.402675   92925 cri.go:89] found id: ""
	I1213 19:13:06.402682   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:06.402740   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:06.406787   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:06.411254   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:06.411335   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:06.437659   92925 cri.go:89] found id: ""
	I1213 19:13:06.437736   92925 logs.go:282] 0 containers: []
	W1213 19:13:06.437751   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:06.437758   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:06.437829   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:06.466702   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:06.466725   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:13:06.466730   92925 cri.go:89] found id: ""
	I1213 19:13:06.466737   92925 logs.go:282] 2 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:13:06.466793   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:06.470567   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:06.474150   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:06.474224   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:06.501494   92925 cri.go:89] found id: ""
	I1213 19:13:06.501569   92925 logs.go:282] 0 containers: []
	W1213 19:13:06.501594   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:06.501617   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:06.501662   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:06.544779   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:06.544813   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:06.609379   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:06.609413   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:06.637668   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:06.637698   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:06.664078   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:06.664105   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:06.709192   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:13:06.709225   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:13:06.737814   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:06.737845   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:06.810267   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:06.810302   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:06.841843   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:06.841871   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:06.938739   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:06.938776   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:06.951386   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:06.951414   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:07.032986   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:07.025075    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:07.025642    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:07.027282    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:07.027955    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:07.029566    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:07.025075    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:07.025642    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:07.027282    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:07.027955    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:07.029566    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:07.033040   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:07.033053   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
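From 19:13:06 onward a second kube-controller-manager container (5dc19f95...) appears alongside the older one (27b6c088...), consistent with the control-plane components being restarted; the harness then gathers the last 400 lines from both via "crictl logs --tail 400 <id>". A minimal sketch of that collection step, assuming the container IDs seen above (illustrative only, not the test code itself):

	// Illustrative sketch: tail the last N log lines of each discovered
	// container with crictl, as the "Gathering logs for ..." steps above do.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func tailContainerLogs(ids []string, lines int) map[string]string {
		logs := make(map[string]string)
		for _, id := range ids {
			// CombinedOutput captures stdout and stderr so error lines are kept.
			out, err := exec.Command("sudo", "crictl", "logs", "--tail",
				fmt.Sprint(lines), id).CombinedOutput()
			if err != nil {
				logs[id] = fmt.Sprintf("error gathering logs: %v", err)
				continue
			}
			logs[id] = string(out)
		}
		return logs
	}

	func main() {
		// IDs taken from the log above; on another cluster they would differ.
		ids := []string{
			"5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee",
			"27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7",
		}
		for id, out := range tailContainerLogs(ids, 400) {
			fmt.Printf("=== %s (%d bytes)\n", id[:12], len(out))
		}
	}
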
	I1213 19:13:09.558493   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:09.570604   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:09.570681   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:09.598108   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:09.598133   92925 cri.go:89] found id: ""
	I1213 19:13:09.598141   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:09.598197   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:09.602596   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:09.602673   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:09.629705   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:09.629727   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:09.629733   92925 cri.go:89] found id: ""
	I1213 19:13:09.629741   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:09.629798   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:09.634280   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:09.637817   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:09.637895   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:09.665414   92925 cri.go:89] found id: ""
	I1213 19:13:09.665438   92925 logs.go:282] 0 containers: []
	W1213 19:13:09.665447   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:09.665453   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:09.665509   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:09.691729   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:09.691754   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:09.691759   92925 cri.go:89] found id: ""
	I1213 19:13:09.691766   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:09.691850   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:09.696064   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:09.700204   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:09.700308   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:09.732154   92925 cri.go:89] found id: ""
	I1213 19:13:09.732181   92925 logs.go:282] 0 containers: []
	W1213 19:13:09.732190   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:09.732196   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:09.732277   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:09.760821   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:09.760844   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:13:09.760849   92925 cri.go:89] found id: ""
	I1213 19:13:09.760856   92925 logs.go:282] 2 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:13:09.760918   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:09.764697   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:09.768225   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:09.768299   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:09.796678   92925 cri.go:89] found id: ""
	I1213 19:13:09.796748   92925 logs.go:282] 0 containers: []
	W1213 19:13:09.796773   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:09.796797   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:09.796844   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:09.892500   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:09.892536   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:09.905527   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:09.905557   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:09.964751   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:09.964785   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:10.026858   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:10.026896   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:10.095709   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:10.095747   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:10.135797   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:10.135834   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:10.207467   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:10.198321    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:10.199090    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:10.200887    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:10.201755    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:10.202624    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:10.198321    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:10.199090    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:10.200887    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:10.201755    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:10.202624    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:10.207502   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:10.207515   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:10.233202   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:10.233298   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:10.259818   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:13:10.259845   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:13:10.286455   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:10.286482   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:10.359430   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:10.359465   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:12.894266   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:12.905675   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:12.905773   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:12.932239   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:12.932259   92925 cri.go:89] found id: ""
	I1213 19:13:12.932267   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:12.932320   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:12.935869   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:12.935938   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:12.961758   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:12.961778   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:12.961782   92925 cri.go:89] found id: ""
	I1213 19:13:12.961789   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:12.961846   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:12.965449   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:12.968967   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:12.969071   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:13.001173   92925 cri.go:89] found id: ""
	I1213 19:13:13.001203   92925 logs.go:282] 0 containers: []
	W1213 19:13:13.001213   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:13.001219   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:13.001333   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:13.029728   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:13.029751   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:13.029756   92925 cri.go:89] found id: ""
	I1213 19:13:13.029764   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:13.029818   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:13.033632   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:13.037474   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:13.037598   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:13.064000   92925 cri.go:89] found id: ""
	I1213 19:13:13.064025   92925 logs.go:282] 0 containers: []
	W1213 19:13:13.064034   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:13.064040   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:13.064151   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:13.092827   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:13.092847   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:13:13.092852   92925 cri.go:89] found id: ""
	I1213 19:13:13.092859   92925 logs.go:282] 2 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:13:13.092913   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:13.097637   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:13.102128   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:13.102195   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:13.132820   92925 cri.go:89] found id: ""
	I1213 19:13:13.132891   92925 logs.go:282] 0 containers: []
	W1213 19:13:13.132912   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:13.132934   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:13.132976   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:13.200851   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:13:13.200889   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:13:13.232573   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:13.232603   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:13.325521   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:13.325556   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:13.338293   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:13.338324   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:13.369921   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:13.369950   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:13.416445   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:13.416477   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:13.443214   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:13.443243   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:13.468415   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:13.468448   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:13.553200   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:13.553248   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:13.596683   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:13.596717   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:13.678127   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:13.669907    8089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:13.670748    8089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:13.672392    8089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:13.672709    8089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:13.674262    8089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:13.669907    8089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:13.670748    8089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:13.672392    8089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:13.672709    8089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:13.674262    8089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:13.678150   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:13.678167   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:16.227377   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:16.238613   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:16.238685   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:16.271628   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:16.271652   92925 cri.go:89] found id: ""
	I1213 19:13:16.271661   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:16.271717   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:16.275571   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:16.275645   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:16.304819   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:16.304843   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:16.304848   92925 cri.go:89] found id: ""
	I1213 19:13:16.304856   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:16.304911   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:16.308802   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:16.312668   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:16.312741   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:16.347113   92925 cri.go:89] found id: ""
	I1213 19:13:16.347137   92925 logs.go:282] 0 containers: []
	W1213 19:13:16.347146   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:16.347153   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:16.347209   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:16.380339   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:16.380362   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:16.380368   92925 cri.go:89] found id: ""
	I1213 19:13:16.380376   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:16.380433   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:16.383986   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:16.387756   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:16.387876   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:16.419309   92925 cri.go:89] found id: ""
	I1213 19:13:16.419344   92925 logs.go:282] 0 containers: []
	W1213 19:13:16.419353   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:16.419359   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:16.419427   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:16.447987   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:16.448019   92925 cri.go:89] found id: ""
	I1213 19:13:16.448028   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:16.448093   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:16.452467   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:16.452551   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:16.478206   92925 cri.go:89] found id: ""
	I1213 19:13:16.478271   92925 logs.go:282] 0 containers: []
	W1213 19:13:16.478298   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:16.478319   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:16.478361   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:16.505859   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:16.505891   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:16.547050   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:16.547085   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:16.591041   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:16.591074   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:16.659418   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:16.659502   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:16.686174   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:16.686202   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:16.763753   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:16.763792   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:16.795967   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:16.795996   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:16.909202   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:16.909246   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:16.921936   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:16.921962   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:16.996415   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:16.987820    8228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:16.988740    8228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:16.990501    8228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:16.990844    8228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:16.992387    8228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:16.987820    8228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:16.988740    8228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:16.990501    8228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:16.990844    8228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:16.992387    8228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:16.996438   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:16.996452   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:19.525182   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:19.536170   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:19.536246   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:19.563344   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:19.563368   92925 cri.go:89] found id: ""
	I1213 19:13:19.563377   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:19.563432   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:19.567191   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:19.567263   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:19.594906   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:19.594926   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:19.594936   92925 cri.go:89] found id: ""
	I1213 19:13:19.594944   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:19.595012   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:19.599420   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:19.603163   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:19.603240   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:19.636656   92925 cri.go:89] found id: ""
	I1213 19:13:19.636681   92925 logs.go:282] 0 containers: []
	W1213 19:13:19.636690   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:19.636696   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:19.636753   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:19.667204   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:19.667274   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:19.667292   92925 cri.go:89] found id: ""
	I1213 19:13:19.667316   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:19.667395   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:19.671184   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:19.674972   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:19.675041   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:19.704947   92925 cri.go:89] found id: ""
	I1213 19:13:19.704971   92925 logs.go:282] 0 containers: []
	W1213 19:13:19.704980   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:19.704988   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:19.705073   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:19.730669   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:19.730691   92925 cri.go:89] found id: ""
	I1213 19:13:19.730699   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:19.730771   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:19.735384   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:19.735477   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:19.760611   92925 cri.go:89] found id: ""
	I1213 19:13:19.760634   92925 logs.go:282] 0 containers: []
	W1213 19:13:19.760643   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:19.760669   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:19.760686   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:19.788592   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:19.788621   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:19.882694   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:19.882730   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:19.954514   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:19.946675    8317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:19.947253    8317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:19.948589    8317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:19.949210    8317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:19.950900    8317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:19.946675    8317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:19.947253    8317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:19.948589    8317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:19.949210    8317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:19.950900    8317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:19.954535   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:19.954550   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:19.980616   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:19.980694   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:20.035895   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:20.035930   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:20.104716   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:20.104768   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:20.199665   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:20.199701   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:20.234652   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:20.234680   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:20.248416   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:20.248444   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:20.296588   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:20.296624   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:22.824017   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:22.838193   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:22.838267   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:22.874481   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:22.874503   92925 cri.go:89] found id: ""
	I1213 19:13:22.874512   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:22.874578   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:22.878378   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:22.878467   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:22.907053   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:22.907075   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:22.907079   92925 cri.go:89] found id: ""
	I1213 19:13:22.907086   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:22.907143   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:22.911144   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:22.914933   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:22.915007   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:22.942646   92925 cri.go:89] found id: ""
	I1213 19:13:22.942714   92925 logs.go:282] 0 containers: []
	W1213 19:13:22.942729   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:22.942736   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:22.942797   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:22.969713   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:22.969735   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:22.969740   92925 cri.go:89] found id: ""
	I1213 19:13:22.969748   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:22.969804   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:22.973708   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:22.977426   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:22.977514   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:23.007912   92925 cri.go:89] found id: ""
	I1213 19:13:23.007939   92925 logs.go:282] 0 containers: []
	W1213 19:13:23.007948   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:23.007955   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:23.008018   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:23.040260   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:23.040284   92925 cri.go:89] found id: ""
	I1213 19:13:23.040293   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:23.040348   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:23.044273   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:23.044348   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:23.073414   92925 cri.go:89] found id: ""
	I1213 19:13:23.073445   92925 logs.go:282] 0 containers: []
	W1213 19:13:23.073454   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:23.073466   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:23.073478   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:23.147486   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:23.147526   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:23.180397   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:23.180426   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:23.262279   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:23.253482    8457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:23.254529    8457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:23.255324    8457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:23.256834    8457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:23.257439    8457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:23.253482    8457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:23.254529    8457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:23.255324    8457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:23.256834    8457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:23.257439    8457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:23.262302   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:23.262318   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:23.288912   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:23.288942   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:23.328328   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:23.328366   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:23.421984   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:23.422020   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:23.524961   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:23.524997   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:23.542790   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:23.542821   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:23.591486   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:23.591522   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:23.621748   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:23.621777   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:26.152673   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:26.164673   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:26.164740   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:26.192010   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:26.192031   92925 cri.go:89] found id: ""
	I1213 19:13:26.192040   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:26.192095   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:26.195849   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:26.195918   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:26.224593   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:26.224657   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:26.224677   92925 cri.go:89] found id: ""
	I1213 19:13:26.224702   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:26.224772   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:26.228545   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:26.231970   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:26.232086   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:26.259044   92925 cri.go:89] found id: ""
	I1213 19:13:26.259066   92925 logs.go:282] 0 containers: []
	W1213 19:13:26.259075   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:26.259080   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:26.259137   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:26.287771   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:26.287793   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:26.287798   92925 cri.go:89] found id: ""
	I1213 19:13:26.287805   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:26.287861   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:26.293156   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:26.296722   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:26.296805   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:26.323701   92925 cri.go:89] found id: ""
	I1213 19:13:26.323731   92925 logs.go:282] 0 containers: []
	W1213 19:13:26.323746   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:26.323753   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:26.323820   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:26.350119   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:26.350137   92925 cri.go:89] found id: ""
	I1213 19:13:26.350145   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:26.350199   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:26.353849   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:26.353916   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:26.380009   92925 cri.go:89] found id: ""
	I1213 19:13:26.380035   92925 logs.go:282] 0 containers: []
	W1213 19:13:26.380044   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:26.380053   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:26.380065   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:26.438029   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:26.438062   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:26.475066   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:26.475096   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:26.507857   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:26.507887   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:26.521466   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:26.521493   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:26.565942   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:26.565983   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:26.634647   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:26.634680   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:26.662943   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:26.662972   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:26.737712   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:26.737749   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:26.840754   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:26.840792   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:26.911511   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:26.903881    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:26.904637    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:26.906164    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:26.906441    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:26.907906    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:26.903881    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:26.904637    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:26.906164    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:26.906441    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:26.907906    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:26.911534   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:26.911547   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:29.438403   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:29.449664   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:29.449742   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:29.477323   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:29.477342   92925 cri.go:89] found id: ""
	I1213 19:13:29.477351   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:29.477405   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:29.480946   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:29.481052   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:29.515446   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:29.515469   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:29.515473   92925 cri.go:89] found id: ""
	I1213 19:13:29.515480   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:29.515537   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:29.520209   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:29.523894   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:29.523994   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:29.550207   92925 cri.go:89] found id: ""
	I1213 19:13:29.550232   92925 logs.go:282] 0 containers: []
	W1213 19:13:29.550242   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:29.550272   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:29.550349   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:29.576154   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:29.576177   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:29.576182   92925 cri.go:89] found id: ""
	I1213 19:13:29.576195   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:29.576267   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:29.580154   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:29.583801   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:29.583876   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:29.613771   92925 cri.go:89] found id: ""
	I1213 19:13:29.613795   92925 logs.go:282] 0 containers: []
	W1213 19:13:29.613805   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:29.613810   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:29.613872   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:29.640080   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:29.640103   92925 cri.go:89] found id: ""
	I1213 19:13:29.640112   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:29.640167   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:29.643810   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:29.643883   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:29.674496   92925 cri.go:89] found id: ""
	I1213 19:13:29.674567   92925 logs.go:282] 0 containers: []
	W1213 19:13:29.674583   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:29.674592   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:29.674616   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:29.704354   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:29.704383   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:29.760688   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:29.760724   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:29.789616   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:29.789644   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:29.817300   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:29.817328   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:29.848838   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:29.848866   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:29.949492   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:29.949527   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:30.081487   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:30.081528   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:30.170948   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:30.170989   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:30.251666   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:30.251705   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:30.265404   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:30.265433   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:30.340984   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:30.332491    8789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:30.333283    8789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:30.335347    8789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:30.335760    8789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:30.337330    8789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:30.332491    8789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:30.333283    8789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:30.335347    8789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:30.335760    8789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:30.337330    8789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:32.841244   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:32.851830   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:32.851904   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:32.878262   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:32.878282   92925 cri.go:89] found id: ""
	I1213 19:13:32.878290   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:32.878345   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:32.881794   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:32.881871   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:32.908784   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:32.908807   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:32.908812   92925 cri.go:89] found id: ""
	I1213 19:13:32.908819   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:32.908877   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:32.913113   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:32.916615   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:32.916713   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:32.945436   92925 cri.go:89] found id: ""
	I1213 19:13:32.945460   92925 logs.go:282] 0 containers: []
	W1213 19:13:32.945468   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:32.945474   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:32.945532   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:32.972389   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:32.972409   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:32.972414   92925 cri.go:89] found id: ""
	I1213 19:13:32.972421   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:32.972496   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:32.976105   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:32.979491   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:32.979558   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:33.013568   92925 cri.go:89] found id: ""
	I1213 19:13:33.013590   92925 logs.go:282] 0 containers: []
	W1213 19:13:33.013598   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:33.013604   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:33.013662   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:33.041534   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:33.041557   92925 cri.go:89] found id: ""
	I1213 19:13:33.041566   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:33.041622   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:33.045294   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:33.045445   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:33.074126   92925 cri.go:89] found id: ""
	I1213 19:13:33.074196   92925 logs.go:282] 0 containers: []
	W1213 19:13:33.074224   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:33.074248   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:33.074274   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:33.108085   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:33.108112   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:33.196053   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:33.196096   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:33.238729   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:33.238801   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:33.334220   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:33.334258   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:33.347401   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:33.347431   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:33.415328   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:33.415362   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:33.444593   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:33.444672   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:33.519042   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:33.509468    8903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:33.510273    8903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:33.511953    8903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:33.512620    8903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:33.513636    8903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:33.509468    8903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:33.510273    8903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:33.511953    8903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:33.512620    8903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:33.513636    8903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:33.519066   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:33.519078   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:33.546564   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:33.546593   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:33.588382   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:33.588418   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:36.135267   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:36.146588   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:36.146662   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:36.173719   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:36.173741   92925 cri.go:89] found id: ""
	I1213 19:13:36.173750   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:36.173821   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:36.177610   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:36.177680   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:36.204513   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:36.204536   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:36.204540   92925 cri.go:89] found id: ""
	I1213 19:13:36.204548   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:36.204602   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:36.208516   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:36.211831   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:36.211901   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:36.243167   92925 cri.go:89] found id: ""
	I1213 19:13:36.243194   92925 logs.go:282] 0 containers: []
	W1213 19:13:36.243205   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:36.243211   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:36.243271   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:36.272787   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:36.272812   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:36.272817   92925 cri.go:89] found id: ""
	I1213 19:13:36.272825   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:36.272880   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:36.276627   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:36.280060   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:36.280182   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:36.309203   92925 cri.go:89] found id: ""
	I1213 19:13:36.309231   92925 logs.go:282] 0 containers: []
	W1213 19:13:36.309242   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:36.309248   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:36.309310   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:36.342531   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:36.342554   92925 cri.go:89] found id: ""
	I1213 19:13:36.342563   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:36.342631   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:36.346318   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:36.346392   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:36.374406   92925 cri.go:89] found id: ""
	I1213 19:13:36.374442   92925 logs.go:282] 0 containers: []
	W1213 19:13:36.374467   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:36.374485   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:36.374497   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:36.474302   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:36.474340   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:36.557406   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:36.549415    9001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:36.550022    9001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:36.551319    9001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:36.551900    9001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:36.553579    9001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:36.549415    9001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:36.550022    9001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:36.551319    9001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:36.551900    9001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:36.553579    9001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:36.557430   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:36.557443   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:36.583387   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:36.583415   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:36.623378   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:36.623413   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:36.666931   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:36.666964   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:36.696482   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:36.696513   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:36.730677   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:36.730708   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:36.743357   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:36.743386   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:36.813864   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:36.813900   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:36.848686   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:36.848716   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:39.433464   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:39.444066   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:39.444136   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:39.471666   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:39.471686   92925 cri.go:89] found id: ""
	I1213 19:13:39.471693   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:39.471753   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:39.475549   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:39.475641   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:39.505541   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:39.505615   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:39.505645   92925 cri.go:89] found id: ""
	I1213 19:13:39.505667   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:39.505752   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:39.511310   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:39.515781   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:39.515898   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:39.545256   92925 cri.go:89] found id: ""
	I1213 19:13:39.545290   92925 logs.go:282] 0 containers: []
	W1213 19:13:39.545300   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:39.545306   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:39.545379   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:39.576057   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:39.576080   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:39.576085   92925 cri.go:89] found id: ""
	I1213 19:13:39.576092   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:39.576146   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:39.580177   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:39.584087   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:39.584160   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:39.610819   92925 cri.go:89] found id: ""
	I1213 19:13:39.610843   92925 logs.go:282] 0 containers: []
	W1213 19:13:39.610863   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:39.610871   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:39.610929   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:39.638458   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:39.638481   92925 cri.go:89] found id: ""
	I1213 19:13:39.638503   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:39.638564   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:39.642537   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:39.642610   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:39.670872   92925 cri.go:89] found id: ""
	I1213 19:13:39.670951   92925 logs.go:282] 0 containers: []
	W1213 19:13:39.670975   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:39.670998   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:39.671043   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:39.774702   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:39.774743   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:39.846826   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:39.837968    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:39.838545    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:39.840574    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:39.841359    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:39.842988    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:39.837968    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:39.838545    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:39.840574    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:39.841359    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:39.842988    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:39.846849   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:39.846862   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:39.892712   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:39.892743   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:39.960690   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:39.960729   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:40.022528   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:40.022560   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:40.107424   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:40.107461   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:40.149433   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:40.149472   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:40.162446   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:40.162479   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:40.191980   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:40.192009   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:40.239148   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:40.239228   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:42.771936   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:42.782654   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:42.782726   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:42.808850   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:42.808869   92925 cri.go:89] found id: ""
	I1213 19:13:42.808877   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:42.808938   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:42.812682   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:42.812753   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:42.840980   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:42.841072   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:42.841097   92925 cri.go:89] found id: ""
	I1213 19:13:42.841122   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:42.841210   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:42.844946   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:42.848726   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:42.848811   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:42.888597   92925 cri.go:89] found id: ""
	I1213 19:13:42.888663   92925 logs.go:282] 0 containers: []
	W1213 19:13:42.888688   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:42.888707   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:42.888791   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:42.916253   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:42.916323   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:42.916341   92925 cri.go:89] found id: ""
	I1213 19:13:42.916364   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:42.916443   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:42.920031   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:42.923493   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:42.923565   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:42.950967   92925 cri.go:89] found id: ""
	I1213 19:13:42.950991   92925 logs.go:282] 0 containers: []
	W1213 19:13:42.950999   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:42.951005   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:42.951062   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:42.977861   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:42.977884   92925 cri.go:89] found id: ""
	I1213 19:13:42.977892   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:42.977946   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:42.985150   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:42.985252   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:43.014767   92925 cri.go:89] found id: ""
	I1213 19:13:43.014794   92925 logs.go:282] 0 containers: []
	W1213 19:13:43.014803   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:43.014813   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:43.014826   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:43.089031   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:43.089070   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:43.152812   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:43.152840   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:43.253685   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:43.253720   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:43.268102   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:43.268130   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:43.342529   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:43.333442    9294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:43.333905    9294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:43.335923    9294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:43.336467    9294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:43.338397    9294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:43.333442    9294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:43.333905    9294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:43.335923    9294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:43.336467    9294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:43.338397    9294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:43.342553   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:43.342566   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:43.383957   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:43.383996   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:43.431627   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:43.431662   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:43.504349   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:43.504386   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:43.541135   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:43.541167   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:43.570288   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:43.570315   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:46.101243   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:46.114537   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:46.114605   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:46.142285   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:46.142310   92925 cri.go:89] found id: ""
	I1213 19:13:46.142319   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:46.142374   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:46.146198   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:46.146275   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:46.172413   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:46.172485   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:46.172504   92925 cri.go:89] found id: ""
	I1213 19:13:46.172529   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:46.172649   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:46.176629   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:46.180398   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:46.180514   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:46.208892   92925 cri.go:89] found id: ""
	I1213 19:13:46.208925   92925 logs.go:282] 0 containers: []
	W1213 19:13:46.208934   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:46.208942   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:46.209074   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:46.237365   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:46.237388   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:46.237394   92925 cri.go:89] found id: ""
	I1213 19:13:46.237401   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:46.237458   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:46.241815   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:46.245384   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:46.245482   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:46.272996   92925 cri.go:89] found id: ""
	I1213 19:13:46.273063   92925 logs.go:282] 0 containers: []
	W1213 19:13:46.273072   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:46.273078   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:46.273160   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:46.302629   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:46.302654   92925 cri.go:89] found id: ""
	I1213 19:13:46.302663   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:46.302737   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:46.306762   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:46.306861   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:46.337280   92925 cri.go:89] found id: ""
	I1213 19:13:46.337346   92925 logs.go:282] 0 containers: []
	W1213 19:13:46.337369   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:46.337384   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:46.337395   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:46.349174   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:46.349204   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:46.419942   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:46.411077    9420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:46.411612    9420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:46.413348    9420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:46.413991    9420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:46.415827    9420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:46.411077    9420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:46.411612    9420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:46.413348    9420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:46.413991    9420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:46.415827    9420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:46.419977   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:46.419993   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:46.446859   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:46.446885   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:46.487087   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:46.487124   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:46.547232   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:46.547267   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:46.574826   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:46.574854   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:46.602584   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:46.602609   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:46.640086   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:46.640117   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:46.740777   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:46.740818   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:46.812315   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:46.812357   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:49.395199   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:49.405934   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:49.406009   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:49.433789   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:49.433810   92925 cri.go:89] found id: ""
	I1213 19:13:49.433827   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:49.433883   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:49.437578   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:49.437651   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:49.471711   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:49.471734   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:49.471740   92925 cri.go:89] found id: ""
	I1213 19:13:49.471748   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:49.471801   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:49.475461   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:49.479094   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:49.479168   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:49.505391   92925 cri.go:89] found id: ""
	I1213 19:13:49.505417   92925 logs.go:282] 0 containers: []
	W1213 19:13:49.505426   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:49.505433   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:49.505488   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:49.540863   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:49.540890   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:49.540895   92925 cri.go:89] found id: ""
	I1213 19:13:49.540903   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:49.540960   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:49.544771   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:49.548451   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:49.548524   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:49.575402   92925 cri.go:89] found id: ""
	I1213 19:13:49.575428   92925 logs.go:282] 0 containers: []
	W1213 19:13:49.575436   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:49.575442   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:49.575501   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:49.605123   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:49.605143   92925 cri.go:89] found id: ""
	I1213 19:13:49.605151   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:49.605211   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:49.608919   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:49.609061   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:49.637050   92925 cri.go:89] found id: ""
	I1213 19:13:49.637075   92925 logs.go:282] 0 containers: []
	W1213 19:13:49.637084   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:49.637093   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:49.637105   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:49.744000   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:49.744048   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:49.811345   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:49.802050    9555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:49.802444    9555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:49.805468    9555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:49.805922    9555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:49.807507    9555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:49.802050    9555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:49.802444    9555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:49.805468    9555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:49.805922    9555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:49.807507    9555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:49.811370   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:49.811384   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:49.852043   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:49.852081   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:49.896314   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:49.896349   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:49.924211   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:49.924240   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:50.006219   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:50.006263   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:50.039895   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:50.039978   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:50.054629   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:50.054656   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:50.084937   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:50.084966   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:50.159510   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:50.159553   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
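	[editor's note] The repeated "connection refused" errors against localhost:8443 above indicate that kubectl cannot reach the apiserver on the node, so minikube keeps re-running its log-gathering pass every few seconds while it waits. The following is a minimal sketch, not minikube source, of the TCP readiness probe that this wait loop effectively performs; the address, interval, and deadline are assumptions chosen to mirror the ~3 s cadence visible in the timestamps.

	// probe_apiserver.go - hypothetical sketch of a readiness probe against the
	// apiserver port seen failing in the log above (localhost:8443).
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func waitForAPIServer(addr string, interval, deadline time.Duration) error {
		end := time.Now().Add(deadline)
		for time.Now().Before(end) {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err == nil {
				conn.Close()
				return nil // port is accepting connections
			}
			// mirrors the "dial tcp [::1]:8443: connect: connection refused" lines above
			fmt.Printf("apiserver not ready (%v); retrying in %s\n", err, interval)
			time.Sleep(interval)
		}
		return fmt.Errorf("apiserver at %s not ready after %s", addr, deadline)
	}

	func main() {
		if err := waitForAPIServer("127.0.0.1:8443", 3*time.Second, 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}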
	I1213 19:13:52.688326   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:52.699486   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:52.699554   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:52.726195   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:52.726216   92925 cri.go:89] found id: ""
	I1213 19:13:52.726224   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:52.726280   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:52.730715   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:52.730785   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:52.756911   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:52.756933   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:52.756938   92925 cri.go:89] found id: ""
	I1213 19:13:52.756946   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:52.757069   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:52.760788   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:52.764452   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:52.764551   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:52.790658   92925 cri.go:89] found id: ""
	I1213 19:13:52.790732   92925 logs.go:282] 0 containers: []
	W1213 19:13:52.790749   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:52.790756   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:52.790816   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:52.818365   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:52.818388   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:52.818394   92925 cri.go:89] found id: ""
	I1213 19:13:52.818402   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:52.818477   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:52.822460   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:52.826054   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:52.826130   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:52.853218   92925 cri.go:89] found id: ""
	I1213 19:13:52.853245   92925 logs.go:282] 0 containers: []
	W1213 19:13:52.853256   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:52.853262   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:52.853321   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:52.879712   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:52.879736   92925 cri.go:89] found id: ""
	I1213 19:13:52.879744   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:52.879798   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:52.883563   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:52.883639   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:52.910499   92925 cri.go:89] found id: ""
	I1213 19:13:52.910526   92925 logs.go:282] 0 containers: []
	W1213 19:13:52.910535   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:52.910545   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:52.910577   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:52.990183   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:52.990219   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:53.026776   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:53.026805   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:53.118043   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:53.107629    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:53.110332    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:53.111160    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:53.112144    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:53.113182    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:53.107629    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:53.110332    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:53.111160    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:53.112144    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:53.113182    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:53.118090   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:53.118141   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:53.160995   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:53.161190   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:53.204763   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:53.204795   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:53.270772   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:53.270810   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:53.370857   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:53.370895   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:53.383046   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:53.383074   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:53.410648   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:53.410684   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:53.439739   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:53.439768   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:55.970243   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:55.981613   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:55.981689   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:56.018614   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:56.018637   92925 cri.go:89] found id: ""
	I1213 19:13:56.018647   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:56.018707   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:56.022914   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:56.022990   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:56.056158   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:56.056182   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:56.056187   92925 cri.go:89] found id: ""
	I1213 19:13:56.056194   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:56.056275   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:56.061504   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:56.065201   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:56.065281   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:56.094861   92925 cri.go:89] found id: ""
	I1213 19:13:56.094887   92925 logs.go:282] 0 containers: []
	W1213 19:13:56.094896   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:56.094903   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:56.094982   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:56.133165   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:56.133240   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:56.133260   92925 cri.go:89] found id: ""
	I1213 19:13:56.133291   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:56.133356   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:56.137225   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:56.140713   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:56.140785   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:56.168013   92925 cri.go:89] found id: ""
	I1213 19:13:56.168039   92925 logs.go:282] 0 containers: []
	W1213 19:13:56.168048   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:56.168055   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:56.168118   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:56.196793   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:56.196867   92925 cri.go:89] found id: ""
	I1213 19:13:56.196876   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:56.196935   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:56.200591   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:56.200672   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:56.227851   92925 cri.go:89] found id: ""
	I1213 19:13:56.227877   92925 logs.go:282] 0 containers: []
	W1213 19:13:56.227887   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:56.227896   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:56.227908   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:56.323380   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:56.323416   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:56.337259   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:56.337289   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:56.362908   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:56.362939   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:56.443333   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:56.443372   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:56.522467   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:56.511318    9846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:56.512215    9846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:56.514040    9846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:56.515835    9846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:56.516378    9846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:56.511318    9846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:56.512215    9846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:56.514040    9846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:56.515835    9846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:56.516378    9846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:56.522485   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:56.522498   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:56.561809   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:56.561843   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:56.606943   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:56.606979   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:56.678268   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:56.678310   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:56.707280   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:56.707309   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:56.736890   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:56.736917   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
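	[editor's note] Each round above looks up control-plane containers with `sudo crictl ps -a --quiet --name=<component>`: one container ID per output line, and an empty result produces the "No container was found matching ..." warnings for coredns, kube-proxy and kindnet. Below is a minimal sketch of that lookup pattern; it assumes crictl is on PATH and passwordless sudo, and is not minikube's cri.go.

	// list_containers.go - hypothetical sketch of the container-ID lookup shown above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func findContainers(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		for _, component := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
			ids, err := findContainers(component)
			if err != nil {
				fmt.Printf("lookup for %q failed: %v\n", component, err)
				continue
			}
			if len(ids) == 0 {
				fmt.Printf("no container was found matching %q\n", component)
				continue
			}
			fmt.Printf("%d containers for %q: %v\n", len(ids), component, ids)
		}
	}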
	I1213 19:13:59.286954   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:59.298376   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:59.298447   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:59.325376   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:59.325399   92925 cri.go:89] found id: ""
	I1213 19:13:59.325407   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:59.325464   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:59.329049   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:59.329123   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:59.356066   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:59.356085   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:59.356089   92925 cri.go:89] found id: ""
	I1213 19:13:59.356097   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:59.356150   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:59.360113   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:59.363660   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:59.363736   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:59.389568   92925 cri.go:89] found id: ""
	I1213 19:13:59.389594   92925 logs.go:282] 0 containers: []
	W1213 19:13:59.389604   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:59.389611   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:59.389692   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:59.423243   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:59.423266   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:59.423270   92925 cri.go:89] found id: ""
	I1213 19:13:59.423278   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:59.423350   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:59.426944   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:59.431770   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:59.431844   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:59.458103   92925 cri.go:89] found id: ""
	I1213 19:13:59.458173   92925 logs.go:282] 0 containers: []
	W1213 19:13:59.458220   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:59.458246   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:59.458332   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:59.487250   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:59.487324   92925 cri.go:89] found id: ""
	I1213 19:13:59.487340   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:59.487406   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:59.491784   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:59.491852   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:59.525717   92925 cri.go:89] found id: ""
	I1213 19:13:59.525739   92925 logs.go:282] 0 containers: []
	W1213 19:13:59.525748   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:59.525756   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:59.525768   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:59.554063   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:59.554091   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:59.599874   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:59.599909   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:59.626733   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:59.626765   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:59.700778   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:59.700814   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:59.713358   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:59.713388   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:59.783137   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:59.774677   10001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:59.775356   10001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:59.776867   10001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:59.777580   10001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:59.778486   10001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:59.774677   10001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:59.775356   10001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:59.776867   10001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:59.777580   10001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:59.778486   10001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:59.783158   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:59.783169   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:59.832218   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:59.832248   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:59.901253   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:59.901329   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:59.930678   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:59.930701   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:59.962070   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:59.962099   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:02.744450   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:02.755514   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:02.755587   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:02.782984   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:02.783079   92925 cri.go:89] found id: ""
	I1213 19:14:02.783095   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:02.783157   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:02.787187   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:02.787262   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:02.814931   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:02.814954   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:02.814959   92925 cri.go:89] found id: ""
	I1213 19:14:02.814967   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:02.815031   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:02.818983   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:02.822788   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:02.822865   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:02.848942   92925 cri.go:89] found id: ""
	I1213 19:14:02.848966   92925 logs.go:282] 0 containers: []
	W1213 19:14:02.848975   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:02.848991   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:02.849096   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:02.876134   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:02.876155   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:02.876160   92925 cri.go:89] found id: ""
	I1213 19:14:02.876168   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:02.876249   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:02.880576   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:02.885335   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:02.885459   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:02.913660   92925 cri.go:89] found id: ""
	I1213 19:14:02.913733   92925 logs.go:282] 0 containers: []
	W1213 19:14:02.913763   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:02.913802   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:02.913924   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:02.940178   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:02.940248   92925 cri.go:89] found id: ""
	I1213 19:14:02.940270   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:02.940359   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:02.944376   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:02.944500   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:02.975815   92925 cri.go:89] found id: ""
	I1213 19:14:02.975838   92925 logs.go:282] 0 containers: []
	W1213 19:14:02.975846   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:02.975855   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:02.975867   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:03.074688   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:03.074723   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:03.156277   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:03.147816   10113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:03.148501   10113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:03.150174   10113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:03.150777   10113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:03.152270   10113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:03.147816   10113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:03.148501   10113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:03.150174   10113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:03.150777   10113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:03.152270   10113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:03.156299   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:03.156311   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:03.182450   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:03.182477   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:03.221147   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:03.221181   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:03.292920   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:03.292962   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:03.323958   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:03.323983   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:03.397255   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:03.397289   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:03.410296   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:03.410325   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:03.465930   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:03.465966   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:03.497989   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:03.498017   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
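	[editor's note] The "Gathering logs for ..." lines in each round tail two kinds of sources: systemd units (kubelet, crio) via journalctl, and individual control-plane containers via `crictl logs --tail 400 <id>`. The sketch below reproduces that gathering pass; it is not minikube's logs.go, the unit names and tail length come from the log above, and the container IDs are placeholders that would normally come from the crictl lookup shown earlier.

	// gather_logs.go - hypothetical sketch of the log-gathering pass repeated above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func tailUnit(unit string, n int) (string, error) {
		out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", fmt.Sprint(n)).CombinedOutput()
		return string(out), err
	}

	func tailContainer(id string, n int) (string, error) {
		out, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
		return string(out), err
	}

	func main() {
		for _, unit := range []string{"kubelet", "crio"} {
			if out, err := tailUnit(unit, 400); err == nil {
				fmt.Printf("== %s ==\n%s\n", unit, out)
			}
		}
		// placeholder container IDs; real ones come from `crictl ps -a --quiet --name=...`
		for _, id := range []string{"<apiserver-container-id>", "<etcd-container-id>"} {
			if out, err := tailContainer(id, 400); err == nil {
				fmt.Printf("== %s ==\n%s\n", id, out)
			}
		}
	}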
	I1213 19:14:06.058798   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:06.069576   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:06.069643   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:06.097652   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:06.097675   92925 cri.go:89] found id: ""
	I1213 19:14:06.097684   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:06.097767   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:06.103860   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:06.103983   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:06.133321   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:06.133354   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:06.133359   92925 cri.go:89] found id: ""
	I1213 19:14:06.133367   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:06.133434   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:06.137349   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:06.140932   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:06.141036   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:06.174768   92925 cri.go:89] found id: ""
	I1213 19:14:06.174796   92925 logs.go:282] 0 containers: []
	W1213 19:14:06.174806   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:06.174813   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:06.174923   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:06.202214   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:06.202245   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:06.202249   92925 cri.go:89] found id: ""
	I1213 19:14:06.202257   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:06.202315   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:06.206201   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:06.209869   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:06.209950   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:06.240738   92925 cri.go:89] found id: ""
	I1213 19:14:06.240762   92925 logs.go:282] 0 containers: []
	W1213 19:14:06.240771   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:06.240777   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:06.240838   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:06.267045   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:06.267067   92925 cri.go:89] found id: ""
	I1213 19:14:06.267076   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:06.267134   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:06.270950   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:06.271059   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:06.298538   92925 cri.go:89] found id: ""
	I1213 19:14:06.298566   92925 logs.go:282] 0 containers: []
	W1213 19:14:06.298576   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:06.298585   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:06.298600   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:06.401303   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:06.401348   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:06.414599   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:06.414631   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:06.441984   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:06.442056   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:06.481290   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:06.481321   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:06.541131   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:06.541162   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:06.614944   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:06.614978   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:06.700895   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:06.700937   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:06.734007   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:06.734036   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:06.804578   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:06.795862   10300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:06.796443   10300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:06.798255   10300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:06.798765   10300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:06.800521   10300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:06.795862   10300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:06.796443   10300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:06.798255   10300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:06.798765   10300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:06.800521   10300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:06.804604   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:06.804616   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:06.832247   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:06.832275   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:09.358770   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:09.369376   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:09.369446   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:09.397174   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:09.397250   92925 cri.go:89] found id: ""
	I1213 19:14:09.397268   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:09.397341   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:09.401282   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:09.401379   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:09.430806   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:09.430829   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:09.430834   92925 cri.go:89] found id: ""
	I1213 19:14:09.430842   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:09.430895   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:09.434593   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:09.437861   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:09.437931   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:09.462972   92925 cri.go:89] found id: ""
	I1213 19:14:09.463040   92925 logs.go:282] 0 containers: []
	W1213 19:14:09.463067   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:09.463087   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:09.463154   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:09.489906   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:09.489930   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:09.489935   92925 cri.go:89] found id: ""
	I1213 19:14:09.489943   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:09.490000   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:09.493996   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:09.497780   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:09.497895   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:09.529207   92925 cri.go:89] found id: ""
	I1213 19:14:09.529232   92925 logs.go:282] 0 containers: []
	W1213 19:14:09.529241   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:09.529280   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:09.529364   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:09.556267   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:09.556289   92925 cri.go:89] found id: ""
	I1213 19:14:09.556297   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:09.556383   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:09.560687   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:09.560770   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:09.592345   92925 cri.go:89] found id: ""
	I1213 19:14:09.592380   92925 logs.go:282] 0 containers: []
	W1213 19:14:09.592389   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:09.592398   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:09.592410   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:09.604889   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:09.604917   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:09.631468   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:09.631498   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:09.670679   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:09.670712   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:09.715815   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:09.715851   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:09.743494   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:09.743523   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:09.775725   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:09.775753   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:09.873965   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:09.874039   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:09.959605   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:09.948036   10435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:09.948708   10435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:09.950229   10435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:09.950803   10435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:09.952453   10435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:09.948036   10435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:09.948708   10435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:09.950229   10435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:09.950803   10435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:09.952453   10435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
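	(Every "describe nodes" attempt in this log fails the same way: kubectl cannot reach the apiserver on localhost:8443, connection refused. A hedged diagnostic sketch, assuming shell access to the control-plane node; the 8443 port comes from the error above and /readyz is the standard kube-apiserver readiness path, neither is part of this test:

	    # Is anything listening on the apiserver port at all?
	    sudo ss -lntp | grep 8443 || echo "nothing listening on 8443"

	    # If something is listening, ask the apiserver's readiness endpoint directly.
	    # -k skips TLS verification; this is a quick probe, not a health contract.
	    curl -k https://localhost:8443/readyz
	)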
	I1213 19:14:09.959680   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:09.959707   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:10.051190   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:10.051228   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:10.086712   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:10.086738   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:12.672644   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:12.683960   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:12.684058   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:12.712689   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:12.712710   92925 cri.go:89] found id: ""
	I1213 19:14:12.712718   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:12.712772   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:12.716732   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:12.716806   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:12.744449   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:12.744468   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:12.744473   92925 cri.go:89] found id: ""
	I1213 19:14:12.744480   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:12.744548   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:12.748558   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:12.752120   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:12.752195   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:12.779575   92925 cri.go:89] found id: ""
	I1213 19:14:12.779602   92925 logs.go:282] 0 containers: []
	W1213 19:14:12.779611   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:12.779617   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:12.779677   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:12.808259   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:12.808279   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:12.808284   92925 cri.go:89] found id: ""
	I1213 19:14:12.808292   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:12.808348   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:12.812274   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:12.816250   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:12.816380   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:12.842528   92925 cri.go:89] found id: ""
	I1213 19:14:12.842556   92925 logs.go:282] 0 containers: []
	W1213 19:14:12.842566   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:12.842572   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:12.842655   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:12.870846   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:12.870916   92925 cri.go:89] found id: ""
	I1213 19:14:12.870939   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:12.871003   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:12.874709   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:12.874809   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:12.901168   92925 cri.go:89] found id: ""
	I1213 19:14:12.901194   92925 logs.go:282] 0 containers: []
	W1213 19:14:12.901203   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:12.901212   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:12.901224   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:12.993856   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:12.993888   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
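	(The kubelet and dmesg sections are gathered with plain journalctl and dmesg, as shown in the Run: lines. A slightly simplified reproduction of the same commands; the -PH -L=never flags recorded in the log only control dmesg output formatting (paging, human-readable timestamps, colour) and are dropped here:

	    # Last 400 kubelet journal entries
	    sudo journalctl -u kubelet -n 400 --no-pager

	    # Kernel messages at warning level or worse
	    sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400
	)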
	I1213 19:14:13.006289   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:13.006320   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:13.038515   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:13.038544   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:13.101746   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:13.101795   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:13.153697   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:13.153736   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:13.183337   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:13.183366   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:13.262960   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:13.262995   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
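	(The "container status" step uses a small shell fallback: resolve crictl where available and, only if that invocation fails, fall back to the docker CLI. The same command as the Run: line above, broken out for readability:

	    # Resolve crictl (or keep the bare name if `which` finds nothing),
	    # list all containers, and only on failure try docker instead.
	    CRICTL="$(which crictl || echo crictl)"
	    sudo "$CRICTL" ps -a || sudo docker ps -a
	)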
	I1213 19:14:13.297818   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:13.297845   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:13.368622   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:13.360485   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:13.361349   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:13.363057   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:13.363352   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:13.364843   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:13.360485   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:13.361349   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:13.363057   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:13.363352   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:13.364843   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:13.368650   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:13.368664   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:13.439804   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:13.439843   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:15.976229   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:15.989077   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:15.989247   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:16.020054   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:16.020079   92925 cri.go:89] found id: ""
	I1213 19:14:16.020087   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:16.020158   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:16.024026   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:16.024118   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:16.051647   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:16.051670   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:16.051681   92925 cri.go:89] found id: ""
	I1213 19:14:16.051688   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:16.051772   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:16.055489   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:16.059115   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:16.059234   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:16.086414   92925 cri.go:89] found id: ""
	I1213 19:14:16.086438   92925 logs.go:282] 0 containers: []
	W1213 19:14:16.086447   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:16.086453   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:16.086513   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:16.118349   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:16.118415   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:16.118434   92925 cri.go:89] found id: ""
	I1213 19:14:16.118458   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:16.118545   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:16.122398   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:16.129488   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:16.129561   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:16.156699   92925 cri.go:89] found id: ""
	I1213 19:14:16.156725   92925 logs.go:282] 0 containers: []
	W1213 19:14:16.156734   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:16.156740   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:16.156799   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:16.183419   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:16.183444   92925 cri.go:89] found id: ""
	I1213 19:14:16.183465   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:16.183520   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:16.187500   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:16.187599   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:16.213532   92925 cri.go:89] found id: ""
	I1213 19:14:16.213610   92925 logs.go:282] 0 containers: []
	W1213 19:14:16.213634   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:16.213657   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:16.213703   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:16.225956   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:16.225985   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:16.299377   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:16.290117   10665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:16.291089   10665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:16.292835   10665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:16.293694   10665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:16.295412   10665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:16.290117   10665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:16.291089   10665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:16.292835   10665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:16.293694   10665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:16.295412   10665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:16.299401   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:16.299416   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:16.327259   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:16.327288   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:16.353346   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:16.353376   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:16.380053   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:16.380079   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:16.415886   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:16.415918   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:16.512571   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:16.512605   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:16.557415   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:16.557451   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:16.616391   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:16.616424   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
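	(Each per-component section, kube-apiserver, etcd, kube-scheduler and kube-controller-manager, is simply the last 400 lines of that container's log, fetched by ID with crictl. For example, using the scheduler ID that appears throughout this log:

	    # Tail the last 400 log lines of a specific container by ID
	    sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168
	)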
	I1213 19:14:16.692096   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:16.692131   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:19.277525   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:19.287988   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:19.288109   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:19.314035   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:19.314055   92925 cri.go:89] found id: ""
	I1213 19:14:19.314064   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:19.314137   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:19.317785   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:19.317856   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:19.344128   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:19.344151   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:19.344155   92925 cri.go:89] found id: ""
	I1213 19:14:19.344163   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:19.344216   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:19.348619   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:19.351872   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:19.351961   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:19.377237   92925 cri.go:89] found id: ""
	I1213 19:14:19.377263   92925 logs.go:282] 0 containers: []
	W1213 19:14:19.377272   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:19.377278   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:19.377360   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:19.404210   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:19.404233   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:19.404238   92925 cri.go:89] found id: ""
	I1213 19:14:19.404245   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:19.404318   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:19.407909   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:19.411268   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:19.411336   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:19.437051   92925 cri.go:89] found id: ""
	I1213 19:14:19.437075   92925 logs.go:282] 0 containers: []
	W1213 19:14:19.437083   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:19.437089   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:19.437147   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:19.461816   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:19.461847   92925 cri.go:89] found id: ""
	I1213 19:14:19.461856   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:19.461911   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:19.465492   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:19.465587   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:19.491501   92925 cri.go:89] found id: ""
	I1213 19:14:19.491527   92925 logs.go:282] 0 containers: []
	W1213 19:14:19.491536   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:19.491545   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:19.491588   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:19.530624   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:19.530652   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:19.570388   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:19.570423   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:19.649601   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:19.649638   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:19.682548   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:19.682579   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:19.765347   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:19.765383   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:19.797401   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:19.797430   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:19.892983   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:19.893036   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:19.905252   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:19.905281   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:19.976038   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:19.968048   10849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:19.968518   10849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:19.969788   10849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:19.970473   10849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:19.972132   10849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:19.968048   10849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:19.968518   10849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:19.969788   10849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:19.970473   10849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:19.972132   10849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:19.976061   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:19.976074   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:20.015893   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:20.015932   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:22.580793   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:22.591726   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:22.591801   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:22.617941   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:22.617972   92925 cri.go:89] found id: ""
	I1213 19:14:22.617981   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:22.618039   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:22.621895   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:22.621967   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:22.648715   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:22.648778   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:22.648797   92925 cri.go:89] found id: ""
	I1213 19:14:22.648821   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:22.648904   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:22.653305   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:22.657032   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:22.657104   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:22.686906   92925 cri.go:89] found id: ""
	I1213 19:14:22.686932   92925 logs.go:282] 0 containers: []
	W1213 19:14:22.686946   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:22.686952   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:22.687013   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:22.714929   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:22.714951   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:22.714956   92925 cri.go:89] found id: ""
	I1213 19:14:22.714964   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:22.715025   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:22.719071   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:22.722714   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:22.722784   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:22.750440   92925 cri.go:89] found id: ""
	I1213 19:14:22.750470   92925 logs.go:282] 0 containers: []
	W1213 19:14:22.750480   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:22.750486   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:22.750549   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:22.777550   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:22.777572   92925 cri.go:89] found id: ""
	I1213 19:14:22.777580   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:22.777635   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:22.781380   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:22.781475   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:22.816511   92925 cri.go:89] found id: ""
	I1213 19:14:22.816537   92925 logs.go:282] 0 containers: []
	W1213 19:14:22.816547   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:22.816572   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:22.816617   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:22.842295   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:22.842322   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:22.882060   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:22.882095   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:22.965336   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:22.965374   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:22.995696   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:22.995731   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:23.098694   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:23.098782   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:23.117712   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:23.117743   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:23.167456   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:23.167497   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:23.195171   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:23.195199   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:23.279228   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:23.279264   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:23.318709   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:23.318738   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:23.384532   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:23.376056   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:23.376628   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:23.378283   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:23.379367   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:23.379806   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:23.376056   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:23.376628   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:23.378283   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:23.379367   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:23.379806   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:25.885566   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:25.896623   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:25.896696   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:25.924503   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:25.924535   92925 cri.go:89] found id: ""
	I1213 19:14:25.924544   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:25.924601   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:25.928341   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:25.928413   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:25.966385   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:25.966404   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:25.966409   92925 cri.go:89] found id: ""
	I1213 19:14:25.966417   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:25.966471   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:25.970190   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:25.974101   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:25.974229   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:26.004380   92925 cri.go:89] found id: ""
	I1213 19:14:26.004456   92925 logs.go:282] 0 containers: []
	W1213 19:14:26.004479   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:26.004498   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:26.004595   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:26.031828   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:26.031853   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:26.031860   92925 cri.go:89] found id: ""
	I1213 19:14:26.031868   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:26.031925   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:26.036387   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:26.040161   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:26.040235   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:26.070525   92925 cri.go:89] found id: ""
	I1213 19:14:26.070591   92925 logs.go:282] 0 containers: []
	W1213 19:14:26.070616   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:26.070635   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:26.070724   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:26.108253   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:26.108277   92925 cri.go:89] found id: ""
	I1213 19:14:26.108294   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:26.108373   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:26.112191   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:26.112324   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:26.146018   92925 cri.go:89] found id: ""
	I1213 19:14:26.146042   92925 logs.go:282] 0 containers: []
	W1213 19:14:26.146052   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:26.146060   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:26.146094   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:26.187197   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:26.187229   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:26.232694   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:26.232724   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:26.310398   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:26.310435   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:26.323748   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:26.323775   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:26.350662   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:26.350689   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:26.380636   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:26.380707   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:26.407064   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:26.407089   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:26.483950   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:26.483984   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:26.536817   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:26.536846   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:26.654750   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:26.654801   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:26.733679   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:26.725319   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:26.726046   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:26.727714   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:26.728228   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:26.729870   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:26.725319   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:26.726046   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:26.727714   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:26.728228   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:26.729870   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
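	(Each retry cycle below starts with the same liveness check: pgrep looks for a kube-apiserver process whose command line mentions the minikube profile, and the whole log-gathering pass repeats only because that check keeps coming back empty. The same pattern as the Run: line that follows, as a one-liner:

	    # -f: match against the full command line, -x: require the pattern to match it exactly, -n: newest match only
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no running kube-apiserver for this profile"
	)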
	I1213 19:14:29.233968   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:29.244666   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:29.244746   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:29.272994   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:29.273043   92925 cri.go:89] found id: ""
	I1213 19:14:29.273051   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:29.273108   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:29.277950   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:29.278022   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:29.304315   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:29.304334   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:29.304338   92925 cri.go:89] found id: ""
	I1213 19:14:29.304346   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:29.304402   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:29.308379   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:29.311905   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:29.311974   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:29.337925   92925 cri.go:89] found id: ""
	I1213 19:14:29.337953   92925 logs.go:282] 0 containers: []
	W1213 19:14:29.337962   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:29.337968   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:29.338028   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:29.365135   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:29.365156   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:29.365160   92925 cri.go:89] found id: ""
	I1213 19:14:29.365167   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:29.365222   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:29.368867   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:29.372263   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:29.372334   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:29.403367   92925 cri.go:89] found id: ""
	I1213 19:14:29.403393   92925 logs.go:282] 0 containers: []
	W1213 19:14:29.403402   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:29.403408   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:29.403466   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:29.429639   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:29.429703   92925 cri.go:89] found id: ""
	I1213 19:14:29.429718   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:29.429782   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:29.433301   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:29.433373   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:29.460244   92925 cri.go:89] found id: ""
	I1213 19:14:29.460272   92925 logs.go:282] 0 containers: []
	W1213 19:14:29.460282   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:29.460291   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:29.460302   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:29.555127   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:29.555166   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:29.583790   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:29.583827   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:29.646377   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:29.646409   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:29.720554   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:29.720592   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:29.751659   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:29.751686   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:29.788857   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:29.788883   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:29.800809   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:29.800844   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:29.869250   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:29.862112   11259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:29.862682   11259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:29.864146   11259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:29.864555   11259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:29.865755   11259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:29.862112   11259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:29.862682   11259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:29.864146   11259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:29.864555   11259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:29.865755   11259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:29.869274   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:29.869287   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:29.913688   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:29.913724   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:29.956382   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:29.956408   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:32.553678   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:32.565396   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:32.565470   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:32.592588   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:32.592613   92925 cri.go:89] found id: ""
	I1213 19:14:32.592622   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:32.592684   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:32.596429   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:32.596509   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:32.624469   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:32.624493   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:32.624499   92925 cri.go:89] found id: ""
	I1213 19:14:32.624506   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:32.624559   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:32.628270   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:32.631873   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:32.632003   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:32.657120   92925 cri.go:89] found id: ""
	I1213 19:14:32.657144   92925 logs.go:282] 0 containers: []
	W1213 19:14:32.657153   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:32.657159   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:32.657220   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:32.684878   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:32.684901   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:32.684906   92925 cri.go:89] found id: ""
	I1213 19:14:32.684914   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:32.684976   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:32.689235   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:32.692754   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:32.692825   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:32.722855   92925 cri.go:89] found id: ""
	I1213 19:14:32.722878   92925 logs.go:282] 0 containers: []
	W1213 19:14:32.722887   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:32.722893   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:32.722952   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:32.753685   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:32.753704   92925 cri.go:89] found id: ""
	I1213 19:14:32.753712   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:32.753764   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:32.758129   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:32.758214   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:32.784526   92925 cri.go:89] found id: ""
	I1213 19:14:32.784599   92925 logs.go:282] 0 containers: []
	W1213 19:14:32.784623   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:32.784645   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:32.784683   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:32.826015   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:32.826050   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:32.915444   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:32.915483   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:32.943132   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:32.943167   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:33.017904   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:33.017945   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:33.050228   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:33.050258   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:33.122559   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:33.114436   11385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:33.115150   11385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:33.116863   11385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:33.117500   11385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:33.118980   11385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:33.114436   11385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:33.115150   11385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:33.116863   11385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:33.117500   11385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:33.118980   11385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:33.122583   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:33.122597   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:33.177421   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:33.177455   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:33.206989   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:33.207016   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:33.305130   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:33.305169   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:33.319318   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:33.319416   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:35.847899   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:35.859028   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:35.859101   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:35.887722   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:35.887745   92925 cri.go:89] found id: ""
	I1213 19:14:35.887754   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:35.887807   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:35.891699   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:35.891771   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:35.920114   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:35.920138   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:35.920144   92925 cri.go:89] found id: ""
	I1213 19:14:35.920152   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:35.920222   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:35.923937   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:35.927605   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:35.927678   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:35.953980   92925 cri.go:89] found id: ""
	I1213 19:14:35.954007   92925 logs.go:282] 0 containers: []
	W1213 19:14:35.954016   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:35.954023   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:35.954080   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:35.980645   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:35.980665   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:35.980670   92925 cri.go:89] found id: ""
	I1213 19:14:35.980678   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:35.980742   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:35.991946   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:35.996641   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:35.996726   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:36.026202   92925 cri.go:89] found id: ""
	I1213 19:14:36.026228   92925 logs.go:282] 0 containers: []
	W1213 19:14:36.026238   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:36.026245   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:36.026350   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:36.051979   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:36.052001   92925 cri.go:89] found id: ""
	I1213 19:14:36.052010   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:36.052066   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:36.055868   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:36.055938   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:36.083649   92925 cri.go:89] found id: ""
	I1213 19:14:36.083675   92925 logs.go:282] 0 containers: []
	W1213 19:14:36.083685   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:36.083693   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:36.083704   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:36.164414   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:36.164464   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:36.198766   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:36.198793   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:36.298985   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:36.299028   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:36.346466   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:36.346498   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:36.376231   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:36.376258   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:36.403571   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:36.403597   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:36.417684   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:36.417714   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:36.487562   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:36.479494   11529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:36.480246   11529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:36.481848   11529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:36.482211   11529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:36.483808   11529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:36.479494   11529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:36.480246   11529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:36.481848   11529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:36.482211   11529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:36.483808   11529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:36.487585   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:36.487597   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:36.514488   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:36.514514   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:36.559954   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:36.559990   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:39.133526   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:39.150754   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:39.150826   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:39.179295   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:39.179315   92925 cri.go:89] found id: ""
	I1213 19:14:39.179324   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:39.179380   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:39.185538   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:39.185605   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:39.216427   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:39.216449   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:39.216454   92925 cri.go:89] found id: ""
	I1213 19:14:39.216462   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:39.216517   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:39.221041   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:39.225622   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:39.225691   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:39.251922   92925 cri.go:89] found id: ""
	I1213 19:14:39.251946   92925 logs.go:282] 0 containers: []
	W1213 19:14:39.251955   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:39.251961   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:39.252019   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:39.281875   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:39.281900   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:39.281905   92925 cri.go:89] found id: ""
	I1213 19:14:39.281912   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:39.281970   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:39.286420   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:39.290568   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:39.290663   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:39.315894   92925 cri.go:89] found id: ""
	I1213 19:14:39.315996   92925 logs.go:282] 0 containers: []
	W1213 19:14:39.316021   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:39.316041   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:39.316153   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:39.344960   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:39.344983   92925 cri.go:89] found id: ""
	I1213 19:14:39.344992   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:39.345091   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:39.348776   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:39.348847   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:39.378840   92925 cri.go:89] found id: ""
	I1213 19:14:39.378862   92925 logs.go:282] 0 containers: []
	W1213 19:14:39.378870   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:39.378879   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:39.378890   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:39.410058   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:39.410087   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:39.510110   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:39.510188   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:39.542821   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:39.542892   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:39.614365   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:39.605214   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:39.606127   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:39.607756   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:39.608303   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:39.610109   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:39.605214   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:39.606127   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:39.607756   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:39.608303   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:39.610109   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:39.614387   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:39.614403   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:39.656166   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:39.656199   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:39.700850   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:39.700887   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:39.735225   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:39.735267   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:39.765360   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:39.765396   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:39.856068   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:39.856115   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:39.883708   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:39.883738   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:42.458661   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:42.469945   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:42.470018   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:42.497805   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:42.497831   92925 cri.go:89] found id: ""
	I1213 19:14:42.497840   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:42.497898   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:42.502059   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:42.502128   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:42.534485   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:42.534509   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:42.534514   92925 cri.go:89] found id: ""
	I1213 19:14:42.534521   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:42.534578   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:42.539929   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:42.544534   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:42.544618   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:42.572959   92925 cri.go:89] found id: ""
	I1213 19:14:42.572983   92925 logs.go:282] 0 containers: []
	W1213 19:14:42.572991   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:42.572998   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:42.573085   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:42.605231   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:42.605253   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:42.605257   92925 cri.go:89] found id: ""
	I1213 19:14:42.605265   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:42.605324   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:42.609379   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:42.613098   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:42.613183   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:42.641856   92925 cri.go:89] found id: ""
	I1213 19:14:42.641881   92925 logs.go:282] 0 containers: []
	W1213 19:14:42.641890   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:42.641897   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:42.641956   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:42.670835   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:42.670862   92925 cri.go:89] found id: ""
	I1213 19:14:42.670870   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:42.670923   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:42.674669   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:42.674780   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:42.701820   92925 cri.go:89] found id: ""
	I1213 19:14:42.701886   92925 logs.go:282] 0 containers: []
	W1213 19:14:42.701912   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:42.701935   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:42.701974   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:42.795111   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:42.795148   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:42.843272   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:42.843308   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:42.918660   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:42.918701   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:42.953437   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:42.953470   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:42.980705   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:42.980735   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:43.075228   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:43.075266   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:43.089833   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:43.089865   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:43.165554   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:43.156189   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:43.157143   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:43.158950   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:43.160521   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:43.161743   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:43.156189   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:43.157143   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:43.158950   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:43.160521   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:43.161743   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:43.165619   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:43.165648   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:43.195772   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:43.195850   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:43.266745   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:43.266781   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:45.800090   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:45.811228   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:45.811319   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:45.844476   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:45.844562   92925 cri.go:89] found id: ""
	I1213 19:14:45.844585   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:45.844658   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:45.848635   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:45.848730   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:45.878507   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:45.878532   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:45.878537   92925 cri.go:89] found id: ""
	I1213 19:14:45.878545   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:45.878626   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:45.883362   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:45.887015   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:45.887090   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:45.922472   92925 cri.go:89] found id: ""
	I1213 19:14:45.922495   92925 logs.go:282] 0 containers: []
	W1213 19:14:45.922504   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:45.922510   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:45.922571   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:45.961736   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:45.961766   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:45.961772   92925 cri.go:89] found id: ""
	I1213 19:14:45.961779   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:45.961846   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:45.965883   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:45.969985   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:45.970062   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:46.005121   92925 cri.go:89] found id: ""
	I1213 19:14:46.005143   92925 logs.go:282] 0 containers: []
	W1213 19:14:46.005153   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:46.005159   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:46.005218   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:46.033851   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:46.033871   92925 cri.go:89] found id: ""
	I1213 19:14:46.033878   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:46.033932   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:46.037737   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:46.037813   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:46.064426   92925 cri.go:89] found id: ""
	I1213 19:14:46.064493   92925 logs.go:282] 0 containers: []
	W1213 19:14:46.064517   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:46.064541   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:46.064580   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:46.162246   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:46.162285   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:46.175470   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:46.175500   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:46.249273   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:46.239319   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:46.240280   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:46.242150   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:46.242816   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:46.244382   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:46.239319   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:46.240280   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:46.242150   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:46.242816   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:46.244382   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:46.249333   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:46.249347   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:46.277985   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:46.278016   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:46.332032   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:46.332065   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:46.376410   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:46.376446   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:46.455695   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:46.455772   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:46.485453   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:46.485479   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:46.522886   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:46.522916   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:46.601217   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:46.601253   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:49.142956   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:49.157230   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:49.157309   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:49.185733   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:49.185767   92925 cri.go:89] found id: ""
	I1213 19:14:49.185775   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:49.185830   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:49.190180   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:49.190249   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:49.218248   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:49.218271   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:49.218276   92925 cri.go:89] found id: ""
	I1213 19:14:49.218285   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:49.218343   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:49.222331   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:49.226027   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:49.226107   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:49.258473   92925 cri.go:89] found id: ""
	I1213 19:14:49.258496   92925 logs.go:282] 0 containers: []
	W1213 19:14:49.258504   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:49.258512   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:49.258570   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:49.285496   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:49.285560   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:49.285578   92925 cri.go:89] found id: ""
	I1213 19:14:49.285601   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:49.285684   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:49.291508   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:49.296197   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:49.296358   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:49.325094   92925 cri.go:89] found id: ""
	I1213 19:14:49.325119   92925 logs.go:282] 0 containers: []
	W1213 19:14:49.325127   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:49.325134   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:49.325193   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:49.350750   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:49.350777   92925 cri.go:89] found id: ""
	I1213 19:14:49.350794   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:49.350857   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:49.354789   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:49.354915   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:49.381275   92925 cri.go:89] found id: ""
	I1213 19:14:49.381302   92925 logs.go:282] 0 containers: []
	W1213 19:14:49.381311   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:49.381320   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:49.381331   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:49.473722   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:49.473760   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:49.486016   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:49.486083   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:49.523030   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:49.523060   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:49.602664   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:49.602699   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:49.685307   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:49.685343   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:49.720678   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:49.720706   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:49.787762   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:49.779084   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:49.779733   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:49.781504   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:49.782055   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:49.783675   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:49.779084   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:49.779733   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:49.781504   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:49.782055   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:49.783675   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:49.787782   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:49.787795   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:49.826153   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:49.826188   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:49.871719   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:49.871752   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:49.902768   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:49.902858   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:52.432900   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:52.443527   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:52.443639   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:52.470204   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:52.470237   92925 cri.go:89] found id: ""
	I1213 19:14:52.470247   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:52.470302   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:52.473971   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:52.474058   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:52.501963   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:52.501983   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:52.501987   92925 cri.go:89] found id: ""
	I1213 19:14:52.501994   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:52.502048   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:52.505744   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:52.509295   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:52.509368   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:52.534850   92925 cri.go:89] found id: ""
	I1213 19:14:52.534917   92925 logs.go:282] 0 containers: []
	W1213 19:14:52.534943   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:52.534959   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:52.535033   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:52.570973   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:52.571045   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:52.571066   92925 cri.go:89] found id: ""
	I1213 19:14:52.571086   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:52.571156   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:52.574824   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:52.578317   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:52.578384   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:52.606849   92925 cri.go:89] found id: ""
	I1213 19:14:52.606873   92925 logs.go:282] 0 containers: []
	W1213 19:14:52.606882   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:52.606888   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:52.606945   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:52.633073   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:52.633095   92925 cri.go:89] found id: ""
	I1213 19:14:52.633103   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:52.633169   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:52.636819   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:52.636895   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:52.663310   92925 cri.go:89] found id: ""
	I1213 19:14:52.663333   92925 logs.go:282] 0 containers: []
	W1213 19:14:52.663342   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:52.663350   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:52.663363   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:52.732904   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:52.724948   12171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:52.725610   12171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:52.727167   12171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:52.727671   12171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:52.729366   12171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:52.724948   12171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:52.725610   12171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:52.727167   12171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:52.727671   12171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:52.729366   12171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:52.732929   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:52.732943   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:52.771098   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:52.771129   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:52.846025   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:52.846063   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:52.888075   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:52.888104   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:52.992414   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:52.992452   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:53.007058   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:53.007089   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:53.034812   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:53.034841   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:53.078790   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:53.078828   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:53.134673   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:53.134708   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:53.162943   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:53.162969   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:55.740743   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:55.751731   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:55.751816   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:55.779888   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:55.779908   92925 cri.go:89] found id: ""
	I1213 19:14:55.779916   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:55.779976   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:55.783761   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:55.783831   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:55.810156   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:55.810175   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:55.810185   92925 cri.go:89] found id: ""
	I1213 19:14:55.810192   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:55.810252   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:55.814013   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:55.817577   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:55.817649   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:55.843468   92925 cri.go:89] found id: ""
	I1213 19:14:55.843491   92925 logs.go:282] 0 containers: []
	W1213 19:14:55.843499   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:55.843505   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:55.843561   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:55.870048   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:55.870081   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:55.870093   92925 cri.go:89] found id: ""
	I1213 19:14:55.870100   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:55.870158   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:55.874026   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:55.877764   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:55.877852   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:55.907873   92925 cri.go:89] found id: ""
	I1213 19:14:55.907900   92925 logs.go:282] 0 containers: []
	W1213 19:14:55.907909   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:55.907915   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:55.907976   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:55.934710   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:55.934732   92925 cri.go:89] found id: ""
	I1213 19:14:55.934740   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:55.934795   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:55.938598   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:55.938671   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:55.968271   92925 cri.go:89] found id: ""
	I1213 19:14:55.968337   92925 logs.go:282] 0 containers: []
	W1213 19:14:55.968361   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:55.968387   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:55.968416   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:56.002213   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:56.002285   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:56.029658   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:56.029741   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:56.125956   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:56.126039   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:56.139465   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:56.139492   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:56.191699   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:56.191735   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:56.278131   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:56.278179   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:56.314251   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:56.314283   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:56.383224   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:56.373948   12354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:56.374799   12354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:56.376672   12354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:56.377083   12354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:56.378823   12354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:56.373948   12354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:56.374799   12354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:56.376672   12354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:56.377083   12354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:56.378823   12354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:56.383248   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:56.383261   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:56.410961   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:56.410990   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:56.450595   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:56.450633   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:59.032642   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:59.043619   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:59.043712   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:59.070836   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:59.070859   92925 cri.go:89] found id: ""
	I1213 19:14:59.070867   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:59.070934   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:59.074933   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:59.075009   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:59.112290   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:59.112313   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:59.112318   92925 cri.go:89] found id: ""
	I1213 19:14:59.112325   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:59.112380   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:59.117374   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:59.121073   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:59.121166   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:59.159645   92925 cri.go:89] found id: ""
	I1213 19:14:59.159714   92925 logs.go:282] 0 containers: []
	W1213 19:14:59.159741   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:59.159763   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:59.159838   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:59.193406   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:59.193430   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:59.193435   92925 cri.go:89] found id: ""
	I1213 19:14:59.193443   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:59.193524   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:59.197329   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:59.201001   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:59.201109   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:59.227682   92925 cri.go:89] found id: ""
	I1213 19:14:59.227706   92925 logs.go:282] 0 containers: []
	W1213 19:14:59.227715   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:59.227721   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:59.227784   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:59.254466   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:59.254497   92925 cri.go:89] found id: ""
	I1213 19:14:59.254505   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:59.254561   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:59.258458   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:59.258530   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:59.285792   92925 cri.go:89] found id: ""
	I1213 19:14:59.285817   92925 logs.go:282] 0 containers: []
	W1213 19:14:59.285826   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:59.285835   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:59.285851   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:59.312955   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:59.312990   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:59.394158   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:59.394195   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:59.439055   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:59.439084   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:59.452200   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:59.452253   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:59.543624   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:59.535183   12473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:59.536016   12473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:59.537681   12473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:59.538269   12473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:59.539987   12473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:59.535183   12473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:59.536016   12473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:59.537681   12473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:59.538269   12473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:59.539987   12473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:59.543645   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:59.543659   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:59.571506   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:59.571533   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:59.615595   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:59.615634   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:59.717216   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:59.717256   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:59.764205   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:59.764243   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:59.840500   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:59.840538   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:02.367252   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:15:02.379179   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:15:02.379252   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:15:02.407368   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:02.407394   92925 cri.go:89] found id: ""
	I1213 19:15:02.407402   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:15:02.407464   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:02.411245   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:15:02.411321   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:15:02.439707   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:02.439727   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:02.439732   92925 cri.go:89] found id: ""
	I1213 19:15:02.439739   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:15:02.439793   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:02.443520   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:02.447838   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:15:02.447965   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:15:02.475049   92925 cri.go:89] found id: ""
	I1213 19:15:02.475077   92925 logs.go:282] 0 containers: []
	W1213 19:15:02.475086   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:15:02.475093   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:15:02.475153   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:15:02.509558   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:02.509582   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:02.509587   92925 cri.go:89] found id: ""
	I1213 19:15:02.509595   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:15:02.509652   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:02.513964   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:02.519816   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:15:02.519888   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:15:02.549572   92925 cri.go:89] found id: ""
	I1213 19:15:02.549639   92925 logs.go:282] 0 containers: []
	W1213 19:15:02.549653   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:15:02.549660   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:15:02.549720   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:15:02.578189   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:02.578215   92925 cri.go:89] found id: ""
	I1213 19:15:02.578224   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:15:02.578287   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:02.582094   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:15:02.582166   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:15:02.609748   92925 cri.go:89] found id: ""
	I1213 19:15:02.609774   92925 logs.go:282] 0 containers: []
	W1213 19:15:02.609783   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:15:02.609792   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:15:02.609823   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:02.660274   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:15:02.660313   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:02.737557   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:15:02.737590   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:15:02.821155   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:15:02.821193   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:15:02.853468   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:15:02.853501   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:15:02.866631   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:15:02.866661   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:02.895294   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:15:02.895323   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:02.940697   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:15:02.940734   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:02.970055   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:15:02.970088   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:03.002379   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:15:03.002409   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:15:03.096355   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:15:03.096390   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:15:03.189863   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:15:03.181408   12656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:03.182165   12656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:03.183899   12656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:03.184754   12656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:03.186389   12656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:15:03.181408   12656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:03.182165   12656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:03.183899   12656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:03.184754   12656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:03.186389   12656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:15:05.690514   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:15:05.702677   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:15:05.702772   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:15:05.730136   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:05.730160   92925 cri.go:89] found id: ""
	I1213 19:15:05.730169   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:15:05.730226   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:05.733966   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:15:05.734047   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:15:05.761337   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:05.761404   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:05.761425   92925 cri.go:89] found id: ""
	I1213 19:15:05.761450   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:15:05.761534   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:05.766511   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:05.770470   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:15:05.770545   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:15:05.803220   92925 cri.go:89] found id: ""
	I1213 19:15:05.803284   92925 logs.go:282] 0 containers: []
	W1213 19:15:05.803300   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:15:05.803306   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:15:05.803383   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:15:05.831772   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:05.831797   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:05.831803   92925 cri.go:89] found id: ""
	I1213 19:15:05.831810   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:15:05.831869   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:05.835814   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:05.839281   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:15:05.839351   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:15:05.870011   92925 cri.go:89] found id: ""
	I1213 19:15:05.870038   92925 logs.go:282] 0 containers: []
	W1213 19:15:05.870059   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:15:05.870065   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:15:05.870126   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:15:05.898850   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:05.898877   92925 cri.go:89] found id: ""
	I1213 19:15:05.898888   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:15:05.898943   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:05.903063   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:15:05.903177   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:15:05.930061   92925 cri.go:89] found id: ""
	I1213 19:15:05.930126   92925 logs.go:282] 0 containers: []
	W1213 19:15:05.930140   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:15:05.930150   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:15:05.930164   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:15:05.943518   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:15:05.943549   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:05.973699   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:15:05.973729   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:15:06.024591   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:15:06.024622   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:15:06.131997   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:15:06.132041   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:15:06.202110   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:15:06.193932   12751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:06.195174   12751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:06.196901   12751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:06.197593   12751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:06.198598   12751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:15:06.193932   12751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:06.195174   12751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:06.196901   12751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:06.197593   12751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:06.198598   12751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:15:06.202133   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:15:06.202145   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:06.241491   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:15:06.241525   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:06.289002   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:15:06.289076   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:06.376385   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:15:06.376422   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:06.406893   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:15:06.406920   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:06.438586   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:15:06.438615   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
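The sweep above repeats every few seconds for the rest of this window: probe for a kube-apiserver process, enumerate control-plane containers with crictl, then tail each container's logs along with kubelet, CRI-O, dmesg and kubectl describe nodes. A minimal standalone Go sketch of the enumeration step follows; it is illustrative only, not minikube's own code, and assumes crictl is on the node's PATH and sudo is passwordless.

// containers.go - illustrative reproduction of the "listing CRI containers"
// step recorded above (sudo crictl ps -a --quiet --name=<component>).
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers returns the IDs of all containers (any state) whose name
// matches the given control-plane component.
func listContainers(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps for %s: %w", component, err)
	}
	return strings.Fields(strings.TrimSpace(string(out))), nil
}

func main() {
	// Same component list the log walks through on every pass.
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
		ids, err := listContainers(c)
		if err != nil {
			fmt.Printf("%s: error: %v\n", c, err)
			continue
		}
		fmt.Printf("%s: %d container(s) %v\n", c, len(ids), ids)
	}
}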
	I1213 19:15:09.021141   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:15:09.032497   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:15:09.032597   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:15:09.061840   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:09.061871   92925 cri.go:89] found id: ""
	I1213 19:15:09.061881   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:15:09.061939   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:09.065632   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:15:09.065706   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:15:09.094419   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:09.094444   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:09.094449   92925 cri.go:89] found id: ""
	I1213 19:15:09.094456   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:15:09.094517   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:09.098305   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:09.108354   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:15:09.108432   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:15:09.137672   92925 cri.go:89] found id: ""
	I1213 19:15:09.137706   92925 logs.go:282] 0 containers: []
	W1213 19:15:09.137716   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:15:09.137722   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:15:09.137785   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:15:09.170831   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:09.170854   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:09.170859   92925 cri.go:89] found id: ""
	I1213 19:15:09.170866   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:15:09.170929   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:09.174672   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:09.177949   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:15:09.178023   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:15:09.208255   92925 cri.go:89] found id: ""
	I1213 19:15:09.208282   92925 logs.go:282] 0 containers: []
	W1213 19:15:09.208291   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:15:09.208297   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:15:09.208352   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:15:09.234350   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:09.234373   92925 cri.go:89] found id: ""
	I1213 19:15:09.234381   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:15:09.234453   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:09.238030   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:15:09.238102   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:15:09.264310   92925 cri.go:89] found id: ""
	I1213 19:15:09.264335   92925 logs.go:282] 0 containers: []
	W1213 19:15:09.264344   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:15:09.264352   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:15:09.264365   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:09.295245   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:15:09.295276   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:15:09.369835   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:15:09.369869   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:15:09.472350   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:15:09.472384   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:09.500555   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:15:09.500589   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:15:09.535996   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:15:09.536032   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:15:09.552067   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:15:09.552096   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:15:09.624766   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:15:09.616285   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:09.617238   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:09.618950   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:09.619348   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:09.620912   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:15:09.616285   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:09.617238   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:09.618950   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:09.619348   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:09.620912   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:15:09.624810   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:15:09.624823   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:09.654769   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:15:09.654796   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:09.695636   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:15:09.695711   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:09.740840   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:15:09.740873   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:12.330150   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:15:12.341327   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:15:12.341430   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:15:12.373666   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:12.373692   92925 cri.go:89] found id: ""
	I1213 19:15:12.373699   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:15:12.373760   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:12.377493   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:15:12.377563   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:15:12.407860   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:12.407882   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:12.407886   92925 cri.go:89] found id: ""
	I1213 19:15:12.407897   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:15:12.407965   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:12.411939   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:12.416613   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:15:12.416687   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:15:12.447044   92925 cri.go:89] found id: ""
	I1213 19:15:12.447071   92925 logs.go:282] 0 containers: []
	W1213 19:15:12.447080   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:15:12.447086   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:15:12.447149   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:15:12.474565   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:12.474599   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:12.474604   92925 cri.go:89] found id: ""
	I1213 19:15:12.474612   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:15:12.474669   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:12.478501   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:12.482327   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:15:12.482425   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:15:12.519207   92925 cri.go:89] found id: ""
	I1213 19:15:12.519235   92925 logs.go:282] 0 containers: []
	W1213 19:15:12.519245   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:15:12.519252   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:15:12.519330   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:15:12.548236   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:12.548259   92925 cri.go:89] found id: ""
	I1213 19:15:12.548269   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:15:12.548334   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:12.552167   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:15:12.552292   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:15:12.581061   92925 cri.go:89] found id: ""
	I1213 19:15:12.581086   92925 logs.go:282] 0 containers: []
	W1213 19:15:12.581094   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:15:12.581103   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:15:12.581115   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:12.626762   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:15:12.626795   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:12.676771   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:15:12.676803   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:12.708623   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:15:12.708661   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:12.735332   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:15:12.735361   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:15:12.830566   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:15:12.830606   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:12.858035   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:15:12.858107   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:12.953406   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:15:12.953445   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:15:13.037585   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:15:13.037626   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:15:13.070076   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:15:13.070108   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:15:13.083239   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:15:13.083266   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:15:13.171369   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:15:13.163050   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:13.163831   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:13.165471   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:13.166105   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:13.167624   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:15:13.163050   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:13.163831   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:13.165471   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:13.166105   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:13.167624   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
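Every describe-nodes attempt in this window fails the same way: the kubectl client cannot reach the apiserver on localhost:8443, so the five connection-refused lines per attempt are one symptom (no listener on the port), not separate faults. A minimal Go probe that distinguishes that case from other dial failures, illustrative only and not part of the test harness; the address matches the endpoint the failing kubectl calls use.

// probe.go - illustrative reachability check for the apiserver endpoint.
package main

import (
	"errors"
	"fmt"
	"net"
	"syscall"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		if errors.Is(err, syscall.ECONNREFUSED) {
			// Matches the "connect: connection refused" seen in the log:
			// nothing is listening on the port, i.e. the apiserver is down.
			fmt.Println("connection refused: no listener on localhost:8443")
		} else {
			fmt.Printf("dial failed for a different reason: %v\n", err)
		}
		return
	}
	conn.Close()
	fmt.Println("port 8443 accepted the connection; the apiserver is at least listening")
}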
	I1213 19:15:15.672265   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:15:15.683518   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:15:15.683589   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:15:15.713736   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:15.713764   92925 cri.go:89] found id: ""
	I1213 19:15:15.713773   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:15:15.713845   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:15.718041   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:15:15.718116   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:15:15.745439   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:15.745462   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:15.745467   92925 cri.go:89] found id: ""
	I1213 19:15:15.745475   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:15:15.745555   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:15.749679   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:15.753271   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:15:15.753343   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:15:15.780766   92925 cri.go:89] found id: ""
	I1213 19:15:15.780791   92925 logs.go:282] 0 containers: []
	W1213 19:15:15.780800   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:15:15.780806   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:15:15.780867   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:15:15.809433   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:15.809453   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:15.809458   92925 cri.go:89] found id: ""
	I1213 19:15:15.809466   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:15:15.809521   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:15.813350   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:15.816829   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:15:15.816899   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:15:15.843466   92925 cri.go:89] found id: ""
	I1213 19:15:15.843491   92925 logs.go:282] 0 containers: []
	W1213 19:15:15.843501   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:15:15.843507   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:15:15.843566   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:15:15.869979   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:15.870003   92925 cri.go:89] found id: ""
	I1213 19:15:15.870012   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:15:15.870069   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:15.873941   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:15:15.874036   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:15:15.906204   92925 cri.go:89] found id: ""
	I1213 19:15:15.906268   92925 logs.go:282] 0 containers: []
	W1213 19:15:15.906283   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:15:15.906293   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:15:15.906305   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:15:16.002221   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:15:16.002261   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:16.030993   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:15:16.031024   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:16.078933   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:15:16.078967   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:15:16.173955   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:15:16.174010   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:15:16.207960   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:15:16.207989   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:15:16.221095   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:15:16.221124   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:15:16.290865   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:15:16.280288   13166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:16.281366   13166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:16.282142   13166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:16.283740   13166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:16.284314   13166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:15:16.280288   13166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:16.281366   13166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:16.282142   13166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:16.283740   13166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:16.284314   13166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:15:16.290940   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:15:16.290969   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:16.330431   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:15:16.330462   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:16.403747   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:15:16.403785   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:16.435000   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:15:16.435076   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:18.967118   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:15:18.978473   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:15:18.978548   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:15:19.009416   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:19.009442   92925 cri.go:89] found id: ""
	I1213 19:15:19.009450   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:15:19.009506   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:19.013229   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:15:19.013304   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:15:19.046195   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:19.046217   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:19.046221   92925 cri.go:89] found id: ""
	I1213 19:15:19.046228   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:15:19.046284   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:19.050380   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:19.055287   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:15:19.055364   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:15:19.084697   92925 cri.go:89] found id: ""
	I1213 19:15:19.084724   92925 logs.go:282] 0 containers: []
	W1213 19:15:19.084734   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:15:19.084740   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:15:19.084799   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:15:19.134188   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:19.134212   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:19.134217   92925 cri.go:89] found id: ""
	I1213 19:15:19.134225   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:15:19.134281   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:19.139452   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:19.143380   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:15:19.143515   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:15:19.176707   92925 cri.go:89] found id: ""
	I1213 19:15:19.176733   92925 logs.go:282] 0 containers: []
	W1213 19:15:19.176742   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:15:19.176748   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:15:19.176808   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:15:19.205658   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:19.205681   92925 cri.go:89] found id: ""
	I1213 19:15:19.205689   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:15:19.205769   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:19.209480   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:15:19.209556   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:15:19.236187   92925 cri.go:89] found id: ""
	I1213 19:15:19.236210   92925 logs.go:282] 0 containers: []
	W1213 19:15:19.236219   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:15:19.236227   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:15:19.236239   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:15:19.335347   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:15:19.335384   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:15:19.347594   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:15:19.347622   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:15:19.423749   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:15:19.415662   13274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:19.416536   13274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:19.418222   13274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:19.418572   13274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:19.420106   13274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:15:19.415662   13274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:19.416536   13274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:19.418222   13274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:19.418572   13274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:19.420106   13274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:15:19.423773   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:15:19.423785   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:19.458293   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:15:19.458322   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:19.491891   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:15:19.491981   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:15:19.532203   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:15:19.532289   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:19.572383   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:15:19.572416   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:19.623843   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:15:19.623878   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:19.701590   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:15:19.701669   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:19.730646   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:15:19.730674   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
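The per-component collection above reduces to two commands: crictl logs --tail 400 <container-id> for each control-plane container found earlier, and journalctl -u <unit> -n 400 for kubelet and CRI-O. A standalone sketch of that step, illustrative only and not minikube's code; it assumes crictl and journalctl are available with passwordless sudo, and the container ID used is the etcd ID from this run.

// tail_logs.go - illustrative version of the log-tailing step seen above.
package main

import (
	"fmt"
	"os/exec"
)

// tailContainer mirrors: sudo crictl logs --tail 400 <id>
func tailContainer(id string) (string, error) {
	out, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
	return string(out), err
}

// tailUnit mirrors: sudo journalctl -u <unit> -n 400
func tailUnit(unit string) (string, error) {
	out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", "400").CombinedOutput()
	return string(out), err
}

func main() {
	for _, unit := range []string{"kubelet", "crio"} {
		logs, err := tailUnit(unit)
		fmt.Printf("=== journalctl -u %s (err=%v) ===\n%s\n", unit, err, logs)
	}
	// Container IDs come from the enumeration step; this is the etcd ID
	// reported in this run and serves only as an example argument.
	logs, err := tailContainer("808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894")
	fmt.Printf("=== etcd container (err=%v) ===\n%s\n", err, logs)
}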
	I1213 19:15:22.313136   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:15:22.324070   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:15:22.324192   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:15:22.354911   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:22.354936   92925 cri.go:89] found id: ""
	I1213 19:15:22.354944   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:15:22.355017   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:22.359138   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:15:22.359232   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:15:22.387533   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:22.387553   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:22.387559   92925 cri.go:89] found id: ""
	I1213 19:15:22.387567   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:15:22.387622   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:22.391451   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:22.395283   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:15:22.395396   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:15:22.424307   92925 cri.go:89] found id: ""
	I1213 19:15:22.424330   92925 logs.go:282] 0 containers: []
	W1213 19:15:22.424338   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:15:22.424345   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:15:22.424406   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:15:22.453085   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:22.453146   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:22.453167   92925 cri.go:89] found id: ""
	I1213 19:15:22.453192   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:15:22.453265   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:22.457420   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:22.461164   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:15:22.461238   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:15:22.491907   92925 cri.go:89] found id: ""
	I1213 19:15:22.491930   92925 logs.go:282] 0 containers: []
	W1213 19:15:22.491939   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:15:22.491944   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:15:22.492029   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:15:22.527521   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:22.527588   92925 cri.go:89] found id: ""
	I1213 19:15:22.527615   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:15:22.527710   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:22.531946   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:15:22.532027   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:15:22.559453   92925 cri.go:89] found id: ""
	I1213 19:15:22.559480   92925 logs.go:282] 0 containers: []
	W1213 19:15:22.559499   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:15:22.559510   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:15:22.559522   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:22.601772   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:15:22.601808   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:22.649158   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:15:22.649193   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:22.676639   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:15:22.676667   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:15:22.777850   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:15:22.777888   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:15:22.851444   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:15:22.842501   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:22.843358   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:22.845491   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:22.846536   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:22.847439   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:15:22.842501   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:22.843358   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:22.845491   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:22.846536   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:22.847439   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:15:22.851468   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:15:22.851480   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:22.933320   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:15:22.933358   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:22.962559   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:15:22.962589   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:15:23.059725   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:15:23.059803   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:15:23.109255   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:15:23.109286   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:15:23.122814   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:15:23.122844   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:25.651780   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:15:25.662957   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:15:25.663032   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:15:25.696971   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:25.696993   92925 cri.go:89] found id: ""
	I1213 19:15:25.697001   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:15:25.697087   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:25.701838   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:15:25.701919   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:15:25.738295   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:25.738373   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:25.738386   92925 cri.go:89] found id: ""
	I1213 19:15:25.738395   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:15:25.738459   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:25.742364   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:25.746297   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:15:25.746400   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:15:25.772105   92925 cri.go:89] found id: ""
	I1213 19:15:25.772178   92925 logs.go:282] 0 containers: []
	W1213 19:15:25.772201   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:15:25.772221   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:15:25.772305   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:15:25.799458   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:25.799526   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:25.799546   92925 cri.go:89] found id: ""
	I1213 19:15:25.799570   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:15:25.799645   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:25.803647   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:25.807583   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:15:25.807695   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:15:25.834975   92925 cri.go:89] found id: ""
	I1213 19:15:25.835051   92925 logs.go:282] 0 containers: []
	W1213 19:15:25.835066   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:15:25.835073   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:15:25.835133   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:15:25.864722   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:25.864769   92925 cri.go:89] found id: ""
	I1213 19:15:25.864778   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:15:25.864836   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:25.868764   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:15:25.868838   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:15:25.897111   92925 cri.go:89] found id: ""
	I1213 19:15:25.897133   92925 logs.go:282] 0 containers: []
	W1213 19:15:25.897141   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:15:25.897162   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:15:25.897174   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:15:26.007072   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:15:26.007104   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:15:26.025166   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:15:26.025201   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:15:26.111354   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:15:26.097401   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:26.097781   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:26.105030   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:26.105458   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:26.107065   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:15:26.097401   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:26.097781   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:26.105030   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:26.105458   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:26.107065   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:15:26.111374   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:15:26.111387   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:26.141476   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:15:26.141507   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:26.169374   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:15:26.169404   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:15:26.246093   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:15:26.246133   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:15:26.297802   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:15:26.297829   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:26.325154   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:15:26.325182   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:26.368489   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:15:26.368524   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:26.414072   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:15:26.414110   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:29.001164   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:15:29.013204   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:15:29.013272   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:15:29.047888   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:29.047909   92925 cri.go:89] found id: ""
	I1213 19:15:29.047918   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:15:29.047982   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:29.051890   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:15:29.051971   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:15:29.077464   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:29.077486   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:29.077490   92925 cri.go:89] found id: ""
	I1213 19:15:29.077498   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:15:29.077553   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:29.081462   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:29.084988   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:15:29.085157   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:15:29.115595   92925 cri.go:89] found id: ""
	I1213 19:15:29.115621   92925 logs.go:282] 0 containers: []
	W1213 19:15:29.115631   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:15:29.115637   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:15:29.115697   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:15:29.160656   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:29.160729   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:29.160748   92925 cri.go:89] found id: ""
	I1213 19:15:29.160772   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:15:29.160853   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:29.165160   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:29.168775   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:15:29.168891   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:15:29.199867   92925 cri.go:89] found id: ""
	I1213 19:15:29.199890   92925 logs.go:282] 0 containers: []
	W1213 19:15:29.199899   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:15:29.199911   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:15:29.200009   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:15:29.226478   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:29.226502   92925 cri.go:89] found id: ""
	I1213 19:15:29.226511   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:15:29.226565   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:29.230306   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:15:29.230382   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:15:29.260973   92925 cri.go:89] found id: ""
	I1213 19:15:29.260999   92925 logs.go:282] 0 containers: []
	W1213 19:15:29.261034   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:15:29.261044   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:15:29.261060   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:29.288533   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:15:29.288560   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:29.317072   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:15:29.317145   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:29.343899   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:15:29.343926   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:15:29.424466   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:15:29.424502   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:15:29.437265   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:15:29.437314   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:15:29.525751   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:15:29.505457   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:29.506350   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:29.518441   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:29.520261   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:29.521214   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:15:29.505457   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:29.506350   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:29.518441   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:29.520261   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:29.521214   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:15:29.525774   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:15:29.525787   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:29.565912   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:15:29.565947   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:29.614921   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:15:29.614962   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:29.695191   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:15:29.695229   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:15:29.726876   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:15:29.726907   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:15:32.331342   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:15:32.342123   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:15:32.342193   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:15:32.377492   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:32.377512   92925 cri.go:89] found id: ""
	I1213 19:15:32.377520   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:15:32.377603   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:32.381461   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:15:32.381535   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:15:32.408828   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:32.408849   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:32.408853   92925 cri.go:89] found id: ""
	I1213 19:15:32.408861   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:15:32.408913   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:32.412666   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:32.416683   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:15:32.416757   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:15:32.444710   92925 cri.go:89] found id: ""
	I1213 19:15:32.444734   92925 logs.go:282] 0 containers: []
	W1213 19:15:32.444744   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:15:32.444750   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:15:32.444842   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:15:32.470813   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:32.470834   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:32.470839   92925 cri.go:89] found id: ""
	I1213 19:15:32.470846   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:15:32.470904   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:32.474746   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:32.478110   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:15:32.478180   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:15:32.505590   92925 cri.go:89] found id: ""
	I1213 19:15:32.505616   92925 logs.go:282] 0 containers: []
	W1213 19:15:32.505625   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:15:32.505630   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:15:32.505685   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:15:32.534851   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:32.534873   92925 cri.go:89] found id: ""
	I1213 19:15:32.534882   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:15:32.534942   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:32.538913   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:15:32.539005   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:15:32.570980   92925 cri.go:89] found id: ""
	I1213 19:15:32.571020   92925 logs.go:282] 0 containers: []
	W1213 19:15:32.571029   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:15:32.571055   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:15:32.571075   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:15:32.672697   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:15:32.672739   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:15:32.685325   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:15:32.685360   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:15:32.762805   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:15:32.754695   13818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:32.755445   13818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:32.756898   13818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:32.757344   13818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:32.759247   13818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:15:32.754695   13818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:32.755445   13818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:32.756898   13818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:32.757344   13818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:32.759247   13818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:15:32.762877   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:15:32.762899   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:32.788216   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:15:32.788243   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:32.831764   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:15:32.831797   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:32.861451   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:15:32.861481   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:32.889040   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:15:32.889113   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:15:32.962682   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:15:32.962721   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:33.005926   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:15:33.005963   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:33.113066   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:15:33.113100   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:15:35.646466   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:15:35.657328   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:15:35.657400   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:15:35.682772   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:35.682796   92925 cri.go:89] found id: ""
	I1213 19:15:35.682805   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:15:35.682862   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:35.686943   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:15:35.687017   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:15:35.713394   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:35.713426   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:35.713433   92925 cri.go:89] found id: ""
	I1213 19:15:35.713440   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:15:35.713492   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:35.717236   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:35.720957   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:15:35.721060   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:15:35.747062   92925 cri.go:89] found id: ""
	I1213 19:15:35.747139   92925 logs.go:282] 0 containers: []
	W1213 19:15:35.747155   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:15:35.747162   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:15:35.747223   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:15:35.780788   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:35.780809   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:35.780814   92925 cri.go:89] found id: ""
	I1213 19:15:35.780822   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:15:35.780877   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:35.784913   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:35.788950   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:15:35.789084   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:15:35.817183   92925 cri.go:89] found id: ""
	I1213 19:15:35.817206   92925 logs.go:282] 0 containers: []
	W1213 19:15:35.817217   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:15:35.817223   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:15:35.817285   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:15:35.844649   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:35.844674   92925 cri.go:89] found id: ""
	I1213 19:15:35.844682   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:15:35.844741   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:35.848694   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:15:35.848772   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:15:35.880264   92925 cri.go:89] found id: ""
	I1213 19:15:35.880293   92925 logs.go:282] 0 containers: []
	W1213 19:15:35.880302   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:15:35.880311   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:15:35.880323   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:35.928133   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:15:35.928168   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:36.005056   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:15:36.005095   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:15:36.088199   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:15:36.088234   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:15:36.195615   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:15:36.195657   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:36.222570   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:15:36.222597   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:36.253158   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:15:36.253189   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:36.282294   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:15:36.282324   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:15:36.315027   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:15:36.315057   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:15:36.327415   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:15:36.327445   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:15:36.397770   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:15:36.388485   14006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:36.389249   14006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:36.391121   14006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:36.392189   14006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:36.392759   14006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:15:36.388485   14006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:36.389249   14006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:36.391121   14006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:36.392189   14006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:36.392759   14006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:15:36.397793   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:15:36.397809   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:38.950291   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:15:38.966129   92925 out.go:203] 
	W1213 19:15:38.969186   92925 out.go:285] X Exiting due to K8S_APISERVER_MISSING: adding node: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	X Exiting due to K8S_APISERVER_MISSING: adding node: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1213 19:15:38.969230   92925 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	* Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1213 19:15:38.969244   92925 out.go:285] * Related issues:
	* Related issues:
	W1213 19:15:38.969256   92925 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	  - https://github.com/kubernetes/minikube/issues/4536
	W1213 19:15:38.969271   92925 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	  - https://github.com/kubernetes/minikube/issues/6014
	I1213 19:15:38.972406   92925 out.go:203] 

                                                
                                                
** /stderr **
ha_test.go:564: failed to start cluster. args "out/minikube-linux-arm64 -p ha-605114 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio" : exit status 105
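For local triage, the failing command and the probes minikube itself was running can be replayed by hand. This is a hedged sketch, not part of the test: the profile name, binary path, driver, and runtime are taken from the ha_test.go args above; the node-side checks simply repeat the crictl/journalctl probes already visible in the log.

	# Re-run the restart exactly as ha_test.go invoked it:
	out/minikube-linux-arm64 -p ha-605114 start --wait true --alsologtostderr -v 5 \
	  --driver=docker --container-runtime=crio

	# If it exits with K8S_APISERVER_MISSING again, check inside the node whether an
	# apiserver container ever appeared (the same probes seen in the log above):
	out/minikube-linux-arm64 -p ha-605114 ssh -- sudo crictl ps -a --name kube-apiserver
	out/minikube-linux-arm64 -p ha-605114 ssh -- sudo journalctl -u kubelet -n 100 --no-pager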
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect ha-605114
helpers_test.go:244: (dbg) docker inspect ha-605114:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b8b77eca4604af1b6af60bca70543cf68142cce11359af7814880328fa73eb01",
	        "Created": "2025-12-13T18:58:54.586877202Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 93050,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T19:07:47.614428932Z",
	            "FinishedAt": "2025-12-13T19:07:46.864889381Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/b8b77eca4604af1b6af60bca70543cf68142cce11359af7814880328fa73eb01/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b8b77eca4604af1b6af60bca70543cf68142cce11359af7814880328fa73eb01/hostname",
	        "HostsPath": "/var/lib/docker/containers/b8b77eca4604af1b6af60bca70543cf68142cce11359af7814880328fa73eb01/hosts",
	        "LogPath": "/var/lib/docker/containers/b8b77eca4604af1b6af60bca70543cf68142cce11359af7814880328fa73eb01/b8b77eca4604af1b6af60bca70543cf68142cce11359af7814880328fa73eb01-json.log",
	        "Name": "/ha-605114",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-605114:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-605114",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b8b77eca4604af1b6af60bca70543cf68142cce11359af7814880328fa73eb01",
	                "LowerDir": "/var/lib/docker/overlay2/8397f5133759b005c7933e08a612b6b8947df33c29226cae46c5c83d03247aff-init/diff:/var/lib/docker/overlay2/4cda671c3c20fb572bbb254b6cb2d66de67b46788c2aa883ec19024f1ff16f23/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8397f5133759b005c7933e08a612b6b8947df33c29226cae46c5c83d03247aff/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8397f5133759b005c7933e08a612b6b8947df33c29226cae46c5c83d03247aff/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8397f5133759b005c7933e08a612b6b8947df33c29226cae46c5c83d03247aff/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-605114",
	                "Source": "/var/lib/docker/volumes/ha-605114/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-605114",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-605114",
	                "name.minikube.sigs.k8s.io": "ha-605114",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7c9ba4aac7e27f5373688f6fc1a7a905972eca17b43555a3811eba451288f742",
	            "SandboxKey": "/var/run/docker/netns/7c9ba4aac7e2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32833"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32834"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32837"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32835"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32836"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-605114": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9a:0b:16:d7:dc:44",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a2f3617b1da5e979c171e0e32faeb143b6ffd1484ed485ce26cb0c66c2f2f8d4",
	                    "EndpointID": "ad19576bfc7fdb2d25ff186edf415bfaa77021d19f2378c0078a6b8dd2c2a121",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-605114",
	                        "b8b77eca4604"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
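The inspect dump above is mostly boilerplate; when only one field matters, a Go-template query is quicker. A minimal sketch, assuming the container name from this report and using standard `docker inspect --format` template syntax:

	# Host port that the node's 8443 (apiserver) is published on (32836 in this run):
	docker inspect -f '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}' ha-605114
	# Note the kubectl errors above fail against [::1]:8443 *inside* the node, so the
	# host-side mapping being present does not by itself mean the apiserver is up.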
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-605114 -n ha-605114
helpers_test.go:253: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p ha-605114 logs -n 25: (2.230520608s)
helpers_test.go:261: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-605114 cp ha-605114-m03:/home/docker/cp-test.txt ha-605114-m04:/home/docker/cp-test_ha-605114-m03_ha-605114-m04.txt               │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:03 UTC │
	│ ssh     │ ha-605114 ssh -n ha-605114-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:03 UTC │
	│ ssh     │ ha-605114 ssh -n ha-605114-m04 sudo cat /home/docker/cp-test_ha-605114-m03_ha-605114-m04.txt                                         │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:03 UTC │
	│ cp      │ ha-605114 cp testdata/cp-test.txt ha-605114-m04:/home/docker/cp-test.txt                                                             │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:03 UTC │
	│ ssh     │ ha-605114 ssh -n ha-605114-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:03 UTC │
	│ cp      │ ha-605114 cp ha-605114-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1407969839/001/cp-test_ha-605114-m04.txt │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:03 UTC │
	│ ssh     │ ha-605114 ssh -n ha-605114-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:03 UTC │
	│ cp      │ ha-605114 cp ha-605114-m04:/home/docker/cp-test.txt ha-605114:/home/docker/cp-test_ha-605114-m04_ha-605114.txt                       │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:03 UTC │
	│ ssh     │ ha-605114 ssh -n ha-605114-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:03 UTC │
	│ ssh     │ ha-605114 ssh -n ha-605114 sudo cat /home/docker/cp-test_ha-605114-m04_ha-605114.txt                                                 │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:03 UTC │
	│ cp      │ ha-605114 cp ha-605114-m04:/home/docker/cp-test.txt ha-605114-m02:/home/docker/cp-test_ha-605114-m04_ha-605114-m02.txt               │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:03 UTC │
	│ ssh     │ ha-605114 ssh -n ha-605114-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:03 UTC │
	│ ssh     │ ha-605114 ssh -n ha-605114-m02 sudo cat /home/docker/cp-test_ha-605114-m04_ha-605114-m02.txt                                         │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:03 UTC │
	│ cp      │ ha-605114 cp ha-605114-m04:/home/docker/cp-test.txt ha-605114-m03:/home/docker/cp-test_ha-605114-m04_ha-605114-m03.txt               │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:03 UTC │
	│ ssh     │ ha-605114 ssh -n ha-605114-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:03 UTC │
	│ ssh     │ ha-605114 ssh -n ha-605114-m03 sudo cat /home/docker/cp-test_ha-605114-m04_ha-605114-m03.txt                                         │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:03 UTC │
	│ node    │ ha-605114 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:03 UTC │
	│ node    │ ha-605114 node start m02 --alsologtostderr -v 5                                                                                      │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:04 UTC │
	│ node    │ ha-605114 node list --alsologtostderr -v 5                                                                                           │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:04 UTC │                     │
	│ stop    │ ha-605114 stop --alsologtostderr -v 5                                                                                                │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:04 UTC │ 13 Dec 25 19:05 UTC │
	│ start   │ ha-605114 start --wait true --alsologtostderr -v 5                                                                                   │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:05 UTC │ 13 Dec 25 19:06 UTC │
	│ node    │ ha-605114 node list --alsologtostderr -v 5                                                                                           │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:06 UTC │                     │
	│ node    │ ha-605114 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:06 UTC │ 13 Dec 25 19:07 UTC │
	│ stop    │ ha-605114 stop --alsologtostderr -v 5                                                                                                │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:07 UTC │ 13 Dec 25 19:07 UTC │
	│ start   │ ha-605114 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                                         │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:07 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 19:07:47
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
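	(As a worked example of that format, the first entry below, "I1213 19:07:47.349427   92925 out.go:360]", reads as severity I (info), date 12/13, time 19:07:47.349427, process id 92925, and source location out.go:360.)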
	I1213 19:07:47.349427   92925 out.go:360] Setting OutFile to fd 1 ...
	I1213 19:07:47.349751   92925 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 19:07:47.349782   92925 out.go:374] Setting ErrFile to fd 2...
	I1213 19:07:47.349805   92925 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 19:07:47.350088   92925 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-2686/.minikube/bin
	I1213 19:07:47.350503   92925 out.go:368] Setting JSON to false
	I1213 19:07:47.351372   92925 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":6620,"bootTime":1765646248,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 19:07:47.351472   92925 start.go:143] virtualization:  
	I1213 19:07:47.357175   92925 out.go:179] * [ha-605114] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 19:07:47.360285   92925 notify.go:221] Checking for updates...
	I1213 19:07:47.363188   92925 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 19:07:47.366066   92925 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 19:07:47.368997   92925 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-2686/kubeconfig
	I1213 19:07:47.371939   92925 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-2686/.minikube
	I1213 19:07:47.374564   92925 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 19:07:47.377424   92925 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 19:07:47.380815   92925 config.go:182] Loaded profile config "ha-605114": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 19:07:47.381472   92925 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 19:07:47.411852   92925 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 19:07:47.411970   92925 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 19:07:47.470115   92925 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-13 19:07:47.460445366 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 19:07:47.470224   92925 docker.go:319] overlay module found
	I1213 19:07:47.473192   92925 out.go:179] * Using the docker driver based on existing profile
	I1213 19:07:47.475964   92925 start.go:309] selected driver: docker
	I1213 19:07:47.475980   92925 start.go:927] validating driver "docker" against &{Name:ha-605114 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-605114 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow
:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 19:07:47.476125   92925 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 19:07:47.476235   92925 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 19:07:47.532110   92925 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-13 19:07:47.522555398 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 19:07:47.532550   92925 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 19:07:47.532582   92925 cni.go:84] Creating CNI manager for ""
	I1213 19:07:47.532636   92925 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1213 19:07:47.532689   92925 start.go:353] cluster config:
	{Name:ha-605114 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-605114 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-s
erver:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 19:07:47.537457   92925 out.go:179] * Starting "ha-605114" primary control-plane node in "ha-605114" cluster
	I1213 19:07:47.540151   92925 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 19:07:47.542975   92925 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 19:07:47.545679   92925 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 19:07:47.545731   92925 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-2686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1213 19:07:47.545743   92925 cache.go:65] Caching tarball of preloaded images
	I1213 19:07:47.545753   92925 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 19:07:47.545828   92925 preload.go:238] Found /home/jenkins/minikube-integration/22122-2686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 19:07:47.545838   92925 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1213 19:07:47.545971   92925 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/config.json ...
	I1213 19:07:47.565319   92925 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 19:07:47.565343   92925 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 19:07:47.565364   92925 cache.go:243] Successfully downloaded all kic artifacts
	I1213 19:07:47.565392   92925 start.go:360] acquireMachinesLock for ha-605114: {Name:mk8d2cbed975abcdd5664438df80622381a361a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 19:07:47.565456   92925 start.go:364] duration metric: took 41.903µs to acquireMachinesLock for "ha-605114"
	I1213 19:07:47.565477   92925 start.go:96] Skipping create...Using existing machine configuration
	I1213 19:07:47.565483   92925 fix.go:54] fixHost starting: 
	I1213 19:07:47.565741   92925 cli_runner.go:164] Run: docker container inspect ha-605114 --format={{.State.Status}}
	I1213 19:07:47.581688   92925 fix.go:112] recreateIfNeeded on ha-605114: state=Stopped err=<nil>
	W1213 19:07:47.581717   92925 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 19:07:47.584947   92925 out.go:252] * Restarting existing docker container for "ha-605114" ...
	I1213 19:07:47.585046   92925 cli_runner.go:164] Run: docker start ha-605114
	I1213 19:07:47.865372   92925 cli_runner.go:164] Run: docker container inspect ha-605114 --format={{.State.Status}}
	I1213 19:07:47.883933   92925 kic.go:430] container "ha-605114" state is running.
	I1213 19:07:47.884352   92925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-605114
	I1213 19:07:47.906511   92925 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/config.json ...
	I1213 19:07:47.906746   92925 machine.go:94] provisionDockerMachine start ...
	I1213 19:07:47.906805   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114
	I1213 19:07:47.930498   92925 main.go:143] libmachine: Using SSH client type: native
	I1213 19:07:47.930829   92925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1213 19:07:47.930842   92925 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 19:07:47.931376   92925 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46728->127.0.0.1:32833: read: connection reset by peer
	I1213 19:07:51.084950   92925 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-605114
	
	I1213 19:07:51.084978   92925 ubuntu.go:182] provisioning hostname "ha-605114"
	I1213 19:07:51.085064   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114
	I1213 19:07:51.103183   92925 main.go:143] libmachine: Using SSH client type: native
	I1213 19:07:51.103509   92925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1213 19:07:51.103523   92925 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-605114 && echo "ha-605114" | sudo tee /etc/hostname
	I1213 19:07:51.262962   92925 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-605114
	
	I1213 19:07:51.263080   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114
	I1213 19:07:51.281758   92925 main.go:143] libmachine: Using SSH client type: native
	I1213 19:07:51.282067   92925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1213 19:07:51.282093   92925 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-605114' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-605114/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-605114' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 19:07:51.433225   92925 main.go:143] libmachine: SSH cmd err, output: <nil>: 
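
(Editorial sketch, not part of the captured log.) The provisioning steps above dial the container's forwarded SSH port with libmachine's native Go client and run `hostname`, `sudo hostname ha-605114 ...`, and the `/etc/hosts` fix-up; the very first dial is reset while sshd is still coming up, then succeeds. A minimal Go replay of that round-trip, assuming the port (32833), user ("docker") and key path reported in the log above, could look like:

// Sketch only: replay the provisioner's "hostname" check over SSH
// using golang.org/x/crypto/ssh, with values taken from the log above.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/22122-2686/.minikube/machines/ha-605114/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:32833", cfg)
	if err != nil {
		panic(err) // the log's first attempt failed the same way while sshd was still starting
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	out, err := session.Output("hostname") // expected output: "ha-605114"
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
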
	I1213 19:07:51.433251   92925 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-2686/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-2686/.minikube}
	I1213 19:07:51.433276   92925 ubuntu.go:190] setting up certificates
	I1213 19:07:51.433294   92925 provision.go:84] configureAuth start
	I1213 19:07:51.433356   92925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-605114
	I1213 19:07:51.451056   92925 provision.go:143] copyHostCerts
	I1213 19:07:51.451109   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem
	I1213 19:07:51.451157   92925 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem, removing ...
	I1213 19:07:51.451169   92925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem
	I1213 19:07:51.451244   92925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem (1082 bytes)
	I1213 19:07:51.451330   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem
	I1213 19:07:51.451351   92925 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem, removing ...
	I1213 19:07:51.451359   92925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem
	I1213 19:07:51.451387   92925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem (1123 bytes)
	I1213 19:07:51.451438   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem
	I1213 19:07:51.451459   92925 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem, removing ...
	I1213 19:07:51.451473   92925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem
	I1213 19:07:51.451505   92925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem (1675 bytes)
	I1213 19:07:51.451557   92925 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem org=jenkins.ha-605114 san=[127.0.0.1 192.168.49.2 ha-605114 localhost minikube]
	I1213 19:07:51.562646   92925 provision.go:177] copyRemoteCerts
	I1213 19:07:51.562709   92925 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 19:07:51.562753   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114
	I1213 19:07:51.579816   92925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/ha-605114/id_rsa Username:docker}
	I1213 19:07:51.684734   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1213 19:07:51.684815   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 19:07:51.703545   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1213 19:07:51.703625   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1213 19:07:51.721319   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1213 19:07:51.721382   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 19:07:51.738806   92925 provision.go:87] duration metric: took 305.496623ms to configureAuth
	I1213 19:07:51.738832   92925 ubuntu.go:206] setting minikube options for container-runtime
	I1213 19:07:51.739059   92925 config.go:182] Loaded profile config "ha-605114": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 19:07:51.739152   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114
	I1213 19:07:51.756183   92925 main.go:143] libmachine: Using SSH client type: native
	I1213 19:07:51.756478   92925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1213 19:07:51.756493   92925 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 19:07:52.176419   92925 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 19:07:52.176439   92925 machine.go:97] duration metric: took 4.269683244s to provisionDockerMachine
	I1213 19:07:52.176449   92925 start.go:293] postStartSetup for "ha-605114" (driver="docker")
	I1213 19:07:52.176460   92925 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 19:07:52.176518   92925 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 19:07:52.176563   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114
	I1213 19:07:52.201857   92925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/ha-605114/id_rsa Username:docker}
	I1213 19:07:52.305092   92925 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 19:07:52.308224   92925 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 19:07:52.308251   92925 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 19:07:52.308263   92925 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-2686/.minikube/addons for local assets ...
	I1213 19:07:52.308316   92925 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-2686/.minikube/files for local assets ...
	I1213 19:07:52.308413   92925 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem -> 46372.pem in /etc/ssl/certs
	I1213 19:07:52.308423   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem -> /etc/ssl/certs/46372.pem
	I1213 19:07:52.308523   92925 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 19:07:52.315982   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem --> /etc/ssl/certs/46372.pem (1708 bytes)
	I1213 19:07:52.333023   92925 start.go:296] duration metric: took 156.543018ms for postStartSetup
	I1213 19:07:52.333100   92925 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 19:07:52.333150   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114
	I1213 19:07:52.353818   92925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/ha-605114/id_rsa Username:docker}
	I1213 19:07:52.454237   92925 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 19:07:52.459167   92925 fix.go:56] duration metric: took 4.893676995s for fixHost
	I1213 19:07:52.459203   92925 start.go:83] releasing machines lock for "ha-605114", held for 4.893726932s
	I1213 19:07:52.459271   92925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-605114
	I1213 19:07:52.475811   92925 ssh_runner.go:195] Run: cat /version.json
	I1213 19:07:52.475832   92925 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 19:07:52.475868   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114
	I1213 19:07:52.475886   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114
	I1213 19:07:52.494277   92925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/ha-605114/id_rsa Username:docker}
	I1213 19:07:52.499565   92925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/ha-605114/id_rsa Username:docker}
	I1213 19:07:52.694122   92925 ssh_runner.go:195] Run: systemctl --version
	I1213 19:07:52.700676   92925 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 19:07:52.737939   92925 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 19:07:52.742564   92925 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 19:07:52.742632   92925 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 19:07:52.750413   92925 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 19:07:52.750438   92925 start.go:496] detecting cgroup driver to use...
	I1213 19:07:52.750469   92925 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 19:07:52.750516   92925 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 19:07:52.765290   92925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 19:07:52.779600   92925 docker.go:218] disabling cri-docker service (if available) ...
	I1213 19:07:52.779718   92925 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 19:07:52.795802   92925 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 19:07:52.809441   92925 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 19:07:52.921383   92925 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 19:07:53.050247   92925 docker.go:234] disabling docker service ...
	I1213 19:07:53.050357   92925 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 19:07:53.065412   92925 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 19:07:53.078985   92925 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 19:07:53.197041   92925 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 19:07:53.312016   92925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 19:07:53.324873   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 19:07:53.338465   92925 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 19:07:53.338566   92925 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:07:53.348165   92925 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 19:07:53.348244   92925 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:07:53.357334   92925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:07:53.366113   92925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:07:53.375030   92925 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 19:07:53.383092   92925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:07:53.392159   92925 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:07:53.400500   92925 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:07:53.409475   92925 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 19:07:53.416937   92925 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 19:07:53.424427   92925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 19:07:53.551020   92925 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 19:07:53.724377   92925 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 19:07:53.724453   92925 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 19:07:53.728412   92925 start.go:564] Will wait 60s for crictl version
	I1213 19:07:53.728528   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:07:53.732393   92925 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 19:07:53.759934   92925 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 19:07:53.760022   92925 ssh_runner.go:195] Run: crio --version
	I1213 19:07:53.792422   92925 ssh_runner.go:195] Run: crio --version
	I1213 19:07:53.826233   92925 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1213 19:07:53.829188   92925 cli_runner.go:164] Run: docker network inspect ha-605114 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 19:07:53.845641   92925 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 19:07:53.849708   92925 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 19:07:53.860398   92925 kubeadm.go:884] updating cluster {Name:ha-605114 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-605114 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubeta
il:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 19:07:53.860545   92925 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 19:07:53.860602   92925 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 19:07:53.896899   92925 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 19:07:53.896925   92925 crio.go:433] Images already preloaded, skipping extraction
	I1213 19:07:53.896980   92925 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 19:07:53.927660   92925 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 19:07:53.927686   92925 cache_images.go:86] Images are preloaded, skipping loading
	I1213 19:07:53.927694   92925 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1213 19:07:53.927835   92925 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-605114 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-605114 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 19:07:53.927943   92925 ssh_runner.go:195] Run: crio config
	I1213 19:07:53.983293   92925 cni.go:84] Creating CNI manager for ""
	I1213 19:07:53.983320   92925 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1213 19:07:53.983344   92925 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 19:07:53.983367   92925 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-605114 NodeName:ha-605114 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 19:07:53.983512   92925 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-605114"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
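
(Editorial sketch, not part of the captured log.) The kubeadm config printed above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is later copied to /var/tmp/minikube/kubeadm.yaml.new. A small Go sketch that walks such a stream and reports each document's kind, assuming a local copy named kubeadm.yaml and the gopkg.in/yaml.v3 module, might look like:

// Sketch only: enumerate the documents in a multi-document kubeadm config
// like the one printed above.
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // hypothetical local copy of the generated config
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break // all documents read
		} else if err != nil {
			panic(err)
		}
		// For the config above this lists InitConfiguration, ClusterConfiguration,
		// KubeletConfiguration and KubeProxyConfiguration with their API groups.
		fmt.Printf("%-35s %s\n", doc.APIVersion, doc.Kind)
	}
}
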
	
	I1213 19:07:53.983533   92925 kube-vip.go:115] generating kube-vip config ...
	I1213 19:07:53.983586   92925 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1213 19:07:53.998146   92925 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1213 19:07:53.998359   92925 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1213 19:07:53.998456   92925 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 19:07:54.007466   92925 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 19:07:54.007601   92925 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1213 19:07:54.016257   92925 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1213 19:07:54.030166   92925 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 19:07:54.043943   92925 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1213 19:07:54.057568   92925 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1213 19:07:54.070913   92925 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1213 19:07:54.074912   92925 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 19:07:54.085321   92925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 19:07:54.204815   92925 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 19:07:54.219656   92925 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114 for IP: 192.168.49.2
	I1213 19:07:54.219678   92925 certs.go:195] generating shared ca certs ...
	I1213 19:07:54.219703   92925 certs.go:227] acquiring lock for ca certs: {Name:mkf9b87b9a1a82bdfcae8a54365751154f6d858f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:07:54.219837   92925 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key
	I1213 19:07:54.219890   92925 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key
	I1213 19:07:54.219904   92925 certs.go:257] generating profile certs ...
	I1213 19:07:54.219983   92925 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/client.key
	I1213 19:07:54.220016   92925 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.key.6ef1fccc
	I1213 19:07:54.220035   92925 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.crt.6ef1fccc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I1213 19:07:54.524208   92925 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.crt.6ef1fccc ...
	I1213 19:07:54.524279   92925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.crt.6ef1fccc: {Name:mk2a78acb3455aba2154553b94cc00acb06ef2bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:07:54.524506   92925 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.key.6ef1fccc ...
	I1213 19:07:54.524551   92925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.key.6ef1fccc: {Name:mk04e3ed8a0db9ab16dbffd5c3b9073d491094e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:07:54.524690   92925 certs.go:382] copying /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.crt.6ef1fccc -> /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.crt
	I1213 19:07:54.524872   92925 certs.go:386] copying /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.key.6ef1fccc -> /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.key
	I1213 19:07:54.525075   92925 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/proxy-client.key
	I1213 19:07:54.525118   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1213 19:07:54.525152   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1213 19:07:54.525194   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1213 19:07:54.525228   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1213 19:07:54.525260   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1213 19:07:54.525307   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1213 19:07:54.525343   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1213 19:07:54.525371   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1213 19:07:54.525461   92925 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637.pem (1338 bytes)
	W1213 19:07:54.525519   92925 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637_empty.pem, impossibly tiny 0 bytes
	I1213 19:07:54.525567   92925 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 19:07:54.525619   92925 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem (1082 bytes)
	I1213 19:07:54.525684   92925 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem (1123 bytes)
	I1213 19:07:54.525769   92925 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem (1675 bytes)
	I1213 19:07:54.525903   92925 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem (1708 bytes)
	I1213 19:07:54.525966   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:07:54.526009   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637.pem -> /usr/share/ca-certificates/4637.pem
	I1213 19:07:54.526041   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem -> /usr/share/ca-certificates/46372.pem
	I1213 19:07:54.526676   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 19:07:54.547219   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 19:07:54.566530   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 19:07:54.584290   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 19:07:54.601920   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1213 19:07:54.619619   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 19:07:54.637359   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 19:07:54.654838   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 19:07:54.674423   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 19:07:54.692475   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637.pem --> /usr/share/ca-certificates/4637.pem (1338 bytes)
	I1213 19:07:54.711269   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem --> /usr/share/ca-certificates/46372.pem (1708 bytes)
	I1213 19:07:54.730584   92925 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 19:07:54.744548   92925 ssh_runner.go:195] Run: openssl version
	I1213 19:07:54.750950   92925 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/46372.pem
	I1213 19:07:54.759097   92925 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/46372.pem /etc/ssl/certs/46372.pem
	I1213 19:07:54.766678   92925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/46372.pem
	I1213 19:07:54.770469   92925 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 18:27 /usr/share/ca-certificates/46372.pem
	I1213 19:07:54.770573   92925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/46372.pem
	I1213 19:07:54.811925   92925 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 19:07:54.820248   92925 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:07:54.829596   92925 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 19:07:54.843944   92925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:07:54.848466   92925 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:07:54.848527   92925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:07:54.910394   92925 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 19:07:54.922018   92925 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4637.pem
	I1213 19:07:54.934942   92925 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4637.pem /etc/ssl/certs/4637.pem
	I1213 19:07:54.943147   92925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4637.pem
	I1213 19:07:54.953686   92925 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 18:27 /usr/share/ca-certificates/4637.pem
	I1213 19:07:54.953799   92925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4637.pem
	I1213 19:07:55.020871   92925 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 19:07:55.034570   92925 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 19:07:55.045312   92925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 19:07:55.146347   92925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 19:07:55.197938   92925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 19:07:55.240888   92925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 19:07:55.293579   92925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 19:07:55.349397   92925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
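
(Editorial sketch, not part of the captured log.) Each `openssl x509 -noout -checkend 86400` run above verifies that a control-plane certificate stays valid for at least the next 24 hours before the existing configuration is reused. A rough standard-library Go equivalent, taking the certificate path as its first argument, could be:

// Sketch only: standard-library equivalent of `openssl x509 -noout -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM certificate found")
		os.Exit(2)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		// Non-zero exit mirrors openssl's behaviour for a soon-to-expire certificate.
		fmt.Printf("certificate expires at %s (within 24h)\n", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Printf("certificate valid until %s\n", cert.NotAfter)
}
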
	I1213 19:07:55.405749   92925 kubeadm.go:401] StartCluster: {Name:ha-605114 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-605114 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:
false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 19:07:55.405941   92925 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 19:07:55.406039   92925 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 19:07:55.476432   92925 cri.go:89] found id: "23b44f60db0dc9ad888430163cce4adc2cef45e4fff10aded1fd37e36e5d5955"
	I1213 19:07:55.476492   92925 cri.go:89] found id: "9a81ddd488bb7e9ca9d20cc8af4e9414463f3bf2bd40edd26c2e9395f731a3ec"
	I1213 19:07:55.476519   92925 cri.go:89] found id: "ee202abc8dba3b97ac56d7c3063ce4fae0734134ba47b9d6070588c897f7baf0"
	I1213 19:07:55.476536   92925 cri.go:89] found id: "3c729bb1538bfb45bc9b5542f5524916c96b118344d2be8a42e58a0bc6d4cb0d"
	I1213 19:07:55.476570   92925 cri.go:89] found id: "2b3744a5aa7a90a9d9036f0de528d8ed7e951f80254fa43fd57f666e0a6ccc86"
	I1213 19:07:55.476591   92925 cri.go:89] found id: ""
	I1213 19:07:55.476674   92925 ssh_runner.go:195] Run: sudo runc list -f json
	W1213 19:07:55.502827   92925 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T19:07:55Z" level=error msg="open /run/runc: no such file or directory"
	I1213 19:07:55.502965   92925 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 19:07:55.514772   92925 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 19:07:55.514841   92925 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 19:07:55.514932   92925 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 19:07:55.530907   92925 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 19:07:55.531414   92925 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-605114" does not appear in /home/jenkins/minikube-integration/22122-2686/kubeconfig
	I1213 19:07:55.531569   92925 kubeconfig.go:62] /home/jenkins/minikube-integration/22122-2686/kubeconfig needs updating (will repair): [kubeconfig missing "ha-605114" cluster setting kubeconfig missing "ha-605114" context setting]
	I1213 19:07:55.531908   92925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/kubeconfig: {Name:mkd364151ba0e08b56cf2ae826abbcc274faaabf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:07:55.532529   92925 kapi.go:59] client config for ha-605114: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/client.crt", KeyFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/client.key", CAFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, Us
erAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 19:07:55.533545   92925 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1213 19:07:55.533623   92925 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1213 19:07:55.533709   92925 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1213 19:07:55.533743   92925 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1213 19:07:55.533762   92925 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1213 19:07:55.533784   92925 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1213 19:07:55.534156   92925 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 19:07:55.550155   92925 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1213 19:07:55.550227   92925 kubeadm.go:602] duration metric: took 35.349185ms to restartPrimaryControlPlane
	I1213 19:07:55.550251   92925 kubeadm.go:403] duration metric: took 144.511847ms to StartCluster
	I1213 19:07:55.550281   92925 settings.go:142] acquiring lock: {Name:mkabef07beee93a0619ef6b8f854900ab9ed0899 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:07:55.550405   92925 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22122-2686/kubeconfig
	I1213 19:07:55.551146   92925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/kubeconfig: {Name:mkd364151ba0e08b56cf2ae826abbcc274faaabf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:07:55.551412   92925 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 19:07:55.551467   92925 start.go:242] waiting for startup goroutines ...
	I1213 19:07:55.551494   92925 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 19:07:55.552092   92925 config.go:182] Loaded profile config "ha-605114": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 19:07:55.557393   92925 out.go:179] * Enabled addons: 
	I1213 19:07:55.560282   92925 addons.go:530] duration metric: took 8.786078ms for enable addons: enabled=[]
	I1213 19:07:55.560370   92925 start.go:247] waiting for cluster config update ...
	I1213 19:07:55.560416   92925 start.go:256] writing updated cluster config ...
	I1213 19:07:55.563604   92925 out.go:203] 
	I1213 19:07:55.566673   92925 config.go:182] Loaded profile config "ha-605114": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 19:07:55.566871   92925 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/config.json ...
	I1213 19:07:55.570151   92925 out.go:179] * Starting "ha-605114-m02" control-plane node in "ha-605114" cluster
	I1213 19:07:55.572987   92925 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 19:07:55.575841   92925 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 19:07:55.578800   92925 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 19:07:55.578823   92925 cache.go:65] Caching tarball of preloaded images
	I1213 19:07:55.578933   92925 preload.go:238] Found /home/jenkins/minikube-integration/22122-2686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 19:07:55.578943   92925 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1213 19:07:55.579063   92925 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/config.json ...
	I1213 19:07:55.579269   92925 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 19:07:55.599207   92925 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 19:07:55.599233   92925 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 19:07:55.599247   92925 cache.go:243] Successfully downloaded all kic artifacts
	I1213 19:07:55.599269   92925 start.go:360] acquireMachinesLock for ha-605114-m02: {Name:mk43db0c2b2ac44e0e8dc9a68aa6922f0bb2fccb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 19:07:55.599325   92925 start.go:364] duration metric: took 36.989µs to acquireMachinesLock for "ha-605114-m02"
	I1213 19:07:55.599348   92925 start.go:96] Skipping create...Using existing machine configuration
	I1213 19:07:55.599358   92925 fix.go:54] fixHost starting: m02
	I1213 19:07:55.599613   92925 cli_runner.go:164] Run: docker container inspect ha-605114-m02 --format={{.State.Status}}
	I1213 19:07:55.630999   92925 fix.go:112] recreateIfNeeded on ha-605114-m02: state=Stopped err=<nil>
	W1213 19:07:55.631030   92925 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 19:07:55.634239   92925 out.go:252] * Restarting existing docker container for "ha-605114-m02" ...
	I1213 19:07:55.634323   92925 cli_runner.go:164] Run: docker start ha-605114-m02
	I1213 19:07:56.013613   92925 cli_runner.go:164] Run: docker container inspect ha-605114-m02 --format={{.State.Status}}
	I1213 19:07:56.043229   92925 kic.go:430] container "ha-605114-m02" state is running.
	I1213 19:07:56.043952   92925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-605114-m02
	I1213 19:07:56.072863   92925 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/config.json ...
	I1213 19:07:56.073198   92925 machine.go:94] provisionDockerMachine start ...
	I1213 19:07:56.073260   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114-m02
	I1213 19:07:56.107315   92925 main.go:143] libmachine: Using SSH client type: native
	I1213 19:07:56.107694   92925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I1213 19:07:56.107711   92925 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 19:07:56.108441   92925 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1213 19:07:59.320519   92925 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-605114-m02
	
	I1213 19:07:59.320540   92925 ubuntu.go:182] provisioning hostname "ha-605114-m02"
	I1213 19:07:59.320600   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114-m02
	I1213 19:07:59.354148   92925 main.go:143] libmachine: Using SSH client type: native
	I1213 19:07:59.354465   92925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I1213 19:07:59.354476   92925 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-605114-m02 && echo "ha-605114-m02" | sudo tee /etc/hostname
	I1213 19:07:59.560753   92925 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-605114-m02
	
	I1213 19:07:59.560835   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114-m02
	I1213 19:07:59.590681   92925 main.go:143] libmachine: Using SSH client type: native
	I1213 19:07:59.590982   92925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I1213 19:07:59.590997   92925 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-605114-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-605114-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-605114-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 19:07:59.777428   92925 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 19:07:59.777502   92925 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-2686/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-2686/.minikube}
	I1213 19:07:59.777532   92925 ubuntu.go:190] setting up certificates
	I1213 19:07:59.777573   92925 provision.go:84] configureAuth start
	I1213 19:07:59.777669   92925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-605114-m02
	I1213 19:07:59.806547   92925 provision.go:143] copyHostCerts
	I1213 19:07:59.806589   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem
	I1213 19:07:59.806621   92925 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem, removing ...
	I1213 19:07:59.806628   92925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem
	I1213 19:07:59.806709   92925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem (1123 bytes)
	I1213 19:07:59.806788   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem
	I1213 19:07:59.806805   92925 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem, removing ...
	I1213 19:07:59.806810   92925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem
	I1213 19:07:59.806854   92925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem (1675 bytes)
	I1213 19:07:59.806898   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem
	I1213 19:07:59.806916   92925 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem, removing ...
	I1213 19:07:59.806920   92925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem
	I1213 19:07:59.806944   92925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem (1082 bytes)
	I1213 19:07:59.806989   92925 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem org=jenkins.ha-605114-m02 san=[127.0.0.1 192.168.49.3 ha-605114-m02 localhost minikube]
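	provision.go generates this docker-machine server certificate in Go; as a rough openssl equivalent (an illustration only, not what runs on the node), a certificate with the same org and SAN list as the line above could be produced from the CA material named in the auth options like so:

	    # illustrative only: sign a server cert for jenkins.ha-605114-m02 with the machine CA
	    openssl req -new -newkey rsa:2048 -nodes \
	      -keyout server-key.pem -out server.csr \
	      -subj "/O=jenkins.ha-605114-m02"
	    openssl x509 -req -in server.csr -days 365 \
	      -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out server.pem \
	      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.49.3,DNS:ha-605114-m02,DNS:localhost,DNS:minikube')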
	I1213 19:07:59.961185   92925 provision.go:177] copyRemoteCerts
	I1213 19:07:59.961261   92925 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 19:07:59.961306   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114-m02
	I1213 19:07:59.986810   92925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/ha-605114-m02/id_rsa Username:docker}
	I1213 19:08:00.131955   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1213 19:08:00.132032   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 19:08:00.173539   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1213 19:08:00.173623   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1213 19:08:00.207894   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1213 19:08:00.207965   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 19:08:00.244666   92925 provision.go:87] duration metric: took 467.054938ms to configureAuth
	I1213 19:08:00.244712   92925 ubuntu.go:206] setting minikube options for container-runtime
	I1213 19:08:00.245918   92925 config.go:182] Loaded profile config "ha-605114": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 19:08:00.246082   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114-m02
	I1213 19:08:00.327171   92925 main.go:143] libmachine: Using SSH client type: native
	I1213 19:08:00.327492   92925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I1213 19:08:00.327508   92925 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 19:08:01.970074   92925 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 19:08:01.970150   92925 machine.go:97] duration metric: took 5.896940025s to provisionDockerMachine
	I1213 19:08:01.970177   92925 start.go:293] postStartSetup for "ha-605114-m02" (driver="docker")
	I1213 19:08:01.970221   92925 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 19:08:01.970316   92925 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 19:08:01.970411   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114-m02
	I1213 19:08:02.009089   92925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/ha-605114-m02/id_rsa Username:docker}
	I1213 19:08:02.129494   92925 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 19:08:02.136549   92925 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 19:08:02.136573   92925 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 19:08:02.136585   92925 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-2686/.minikube/addons for local assets ...
	I1213 19:08:02.136646   92925 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-2686/.minikube/files for local assets ...
	I1213 19:08:02.136728   92925 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem -> 46372.pem in /etc/ssl/certs
	I1213 19:08:02.136734   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem -> /etc/ssl/certs/46372.pem
	I1213 19:08:02.136842   92925 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 19:08:02.171248   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem --> /etc/ssl/certs/46372.pem (1708 bytes)
	I1213 19:08:02.216469   92925 start.go:296] duration metric: took 246.261152ms for postStartSetup
	I1213 19:08:02.216625   92925 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 19:08:02.216685   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114-m02
	I1213 19:08:02.262639   92925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/ha-605114-m02/id_rsa Username:docker}
	I1213 19:08:02.374718   92925 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 19:08:02.380084   92925 fix.go:56] duration metric: took 6.780718951s for fixHost
	I1213 19:08:02.380108   92925 start.go:83] releasing machines lock for "ha-605114-m02", held for 6.780770726s
	I1213 19:08:02.380176   92925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-605114-m02
	I1213 19:08:02.401071   92925 out.go:179] * Found network options:
	I1213 19:08:02.404164   92925 out.go:179]   - NO_PROXY=192.168.49.2
	W1213 19:08:02.407079   92925 proxy.go:120] fail to check proxy env: Error ip not in block
	W1213 19:08:02.407127   92925 proxy.go:120] fail to check proxy env: Error ip not in block
	I1213 19:08:02.407198   92925 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 19:08:02.407241   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114-m02
	I1213 19:08:02.407257   92925 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 19:08:02.407313   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114-m02
	I1213 19:08:02.441677   92925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/ha-605114-m02/id_rsa Username:docker}
	I1213 19:08:02.462715   92925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/ha-605114-m02/id_rsa Username:docker}
	I1213 19:08:02.700903   92925 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 19:08:02.788606   92925 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 19:08:02.788680   92925 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 19:08:02.802406   92925 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 19:08:02.802471   92925 start.go:496] detecting cgroup driver to use...
	I1213 19:08:02.802520   92925 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 19:08:02.802599   92925 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 19:08:02.821557   92925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 19:08:02.843971   92925 docker.go:218] disabling cri-docker service (if available) ...
	I1213 19:08:02.844081   92925 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 19:08:02.866953   92925 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 19:08:02.884909   92925 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 19:08:03.137948   92925 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 19:08:03.363884   92925 docker.go:234] disabling docker service ...
	I1213 19:08:03.363990   92925 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 19:08:03.388880   92925 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 19:08:03.405597   92925 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 19:08:03.645933   92925 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 19:08:03.919704   92925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 19:08:03.941774   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 19:08:03.972913   92925 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 19:08:03.973103   92925 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:08:03.988083   92925 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 19:08:03.988256   92925 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:08:04.019667   92925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:08:04.031645   92925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:08:04.049709   92925 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 19:08:04.086713   92925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:08:04.109181   92925 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:08:04.119963   92925 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
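	The sed/grep chain above edits /etc/crio/crio.conf.d/02-crio.conf in place. After it runs, the keys it touches should read roughly as follows (a sketch of just those keys; the real drop-in contains other settings not shown here):

	    # expected state of the touched keys in /etc/crio/crio.conf.d/02-crio.conf (sketch)
	    #   pause_image = "registry.k8s.io/pause:3.10.1"
	    #   cgroup_manager = "cgroupfs"
	    #   conmon_cgroup = "pod"
	    #   default_sysctls = [
	    #     "net.ipv4.ip_unprivileged_port_start=0",
	    #   ]
	    # one way to confirm the values CRI-O actually loads (assumes `crio config` is available on the node):
	    sudo crio config 2>/dev/null | grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start'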
	I1213 19:08:04.154436   92925 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 19:08:04.170086   92925 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 19:08:04.191001   92925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 19:08:04.484381   92925 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 19:09:34.781930   92925 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.297515083s)
	I1213 19:09:34.781956   92925 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 19:09:34.782006   92925 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 19:09:34.785743   92925 start.go:564] Will wait 60s for crictl version
	I1213 19:09:34.785812   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:09:34.789353   92925 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 19:09:34.818524   92925 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 19:09:34.818612   92925 ssh_runner.go:195] Run: crio --version
	I1213 19:09:34.852441   92925 ssh_runner.go:195] Run: crio --version
	I1213 19:09:34.887257   92925 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1213 19:09:34.890293   92925 out.go:179]   - env NO_PROXY=192.168.49.2
	I1213 19:09:34.893426   92925 cli_runner.go:164] Run: docker network inspect ha-605114 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 19:09:34.911684   92925 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 19:09:34.915601   92925 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 19:09:34.925402   92925 mustload.go:66] Loading cluster: ha-605114
	I1213 19:09:34.925637   92925 config.go:182] Loaded profile config "ha-605114": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 19:09:34.925900   92925 cli_runner.go:164] Run: docker container inspect ha-605114 --format={{.State.Status}}
	I1213 19:09:34.944458   92925 host.go:66] Checking if "ha-605114" exists ...
	I1213 19:09:34.944731   92925 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114 for IP: 192.168.49.3
	I1213 19:09:34.944745   92925 certs.go:195] generating shared ca certs ...
	I1213 19:09:34.944760   92925 certs.go:227] acquiring lock for ca certs: {Name:mkf9b87b9a1a82bdfcae8a54365751154f6d858f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:09:34.944889   92925 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key
	I1213 19:09:34.944944   92925 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key
	I1213 19:09:34.944957   92925 certs.go:257] generating profile certs ...
	I1213 19:09:34.945069   92925 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/client.key
	I1213 19:09:34.945157   92925 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.key.29c07aea
	I1213 19:09:34.945202   92925 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/proxy-client.key
	I1213 19:09:34.945215   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1213 19:09:34.945230   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1213 19:09:34.945254   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1213 19:09:34.945266   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1213 19:09:34.945281   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1213 19:09:34.945294   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1213 19:09:34.945309   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1213 19:09:34.945328   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1213 19:09:34.945383   92925 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637.pem (1338 bytes)
	W1213 19:09:34.945424   92925 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637_empty.pem, impossibly tiny 0 bytes
	I1213 19:09:34.945446   92925 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 19:09:34.945479   92925 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem (1082 bytes)
	I1213 19:09:34.945508   92925 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem (1123 bytes)
	I1213 19:09:34.945538   92925 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem (1675 bytes)
	I1213 19:09:34.945583   92925 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem (1708 bytes)
	I1213 19:09:34.945616   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:09:34.945632   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637.pem -> /usr/share/ca-certificates/4637.pem
	I1213 19:09:34.945649   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem -> /usr/share/ca-certificates/46372.pem
	I1213 19:09:34.945719   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114
	I1213 19:09:34.963328   92925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/ha-605114/id_rsa Username:docker}
	I1213 19:09:35.065324   92925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1213 19:09:35.069081   92925 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1213 19:09:35.077819   92925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1213 19:09:35.081455   92925 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1213 19:09:35.089763   92925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1213 19:09:35.093612   92925 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1213 19:09:35.102260   92925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1213 19:09:35.106728   92925 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1213 19:09:35.115519   92925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1213 19:09:35.119196   92925 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1213 19:09:35.129001   92925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1213 19:09:35.132624   92925 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1213 19:09:35.141653   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 19:09:35.161897   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 19:09:35.182131   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 19:09:35.202060   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 19:09:35.222310   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1213 19:09:35.243497   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 19:09:35.265517   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 19:09:35.284987   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 19:09:35.302971   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 19:09:35.320388   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637.pem --> /usr/share/ca-certificates/4637.pem (1338 bytes)
	I1213 19:09:35.338865   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem --> /usr/share/ca-certificates/46372.pem (1708 bytes)
	I1213 19:09:35.356332   92925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1213 19:09:35.369616   92925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1213 19:09:35.383108   92925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1213 19:09:35.396928   92925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1213 19:09:35.410529   92925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1213 19:09:35.423162   92925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1213 19:09:35.436667   92925 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1213 19:09:35.450451   92925 ssh_runner.go:195] Run: openssl version
	I1213 19:09:35.457142   92925 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:09:35.464516   92925 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 19:09:35.472169   92925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:09:35.475920   92925 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:09:35.475984   92925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:09:35.516956   92925 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 19:09:35.524426   92925 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4637.pem
	I1213 19:09:35.532136   92925 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4637.pem /etc/ssl/certs/4637.pem
	I1213 19:09:35.539767   92925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4637.pem
	I1213 19:09:35.543798   92925 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 18:27 /usr/share/ca-certificates/4637.pem
	I1213 19:09:35.543906   92925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4637.pem
	I1213 19:09:35.586837   92925 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 19:09:35.594791   92925 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/46372.pem
	I1213 19:09:35.602550   92925 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/46372.pem /etc/ssl/certs/46372.pem
	I1213 19:09:35.610984   92925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/46372.pem
	I1213 19:09:35.614895   92925 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 18:27 /usr/share/ca-certificates/46372.pem
	I1213 19:09:35.614973   92925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/46372.pem
	I1213 19:09:35.661484   92925 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 19:09:35.668847   92925 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 19:09:35.672924   92925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 19:09:35.714926   92925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 19:09:35.757278   92925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 19:09:35.798060   92925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 19:09:35.840340   92925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 19:09:35.883228   92925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
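	The six openssl invocations above are 24-hour expiry checks: `-checkend 86400` makes openssl exit non-zero if the certificate will have expired 86400 seconds (24 h) from now. A standalone sketch of the same check:

	    # the exit status tells you whether the cert is still valid 24 h from now
	    if openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
	      echo "etcd server cert valid for at least another 24h"
	    else
	      echo "etcd server cert expires (or is already expired) within 24h"
	    fi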
	I1213 19:09:35.926498   92925 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.2 crio true true} ...
	I1213 19:09:35.926597   92925 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-605114-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-605114 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 19:09:35.926628   92925 kube-vip.go:115] generating kube-vip config ...
	I1213 19:09:35.926680   92925 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1213 19:09:35.939407   92925 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1213 19:09:35.939464   92925 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
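	Given the manifest above (vip_interface=eth0, address=192.168.49.254, port 8443), once kube-vip wins the plndr-cp-lock leader election the HA VIP should be bound on eth0 of the leading control-plane node and the apiserver should answer on it. A hypothetical spot check, not part of the test itself:

	    # on whichever control-plane node currently holds the plndr-cp-lock lease:
	    ip addr show dev eth0 | grep 192.168.49.254
	    # /healthz is anonymously readable on a default kubeadm apiserver; -k because this
	    # shell may not trust the cluster CA that signed the serving certificate
	    curl -k https://192.168.49.254:8443/healthz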
	I1213 19:09:35.939538   92925 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 19:09:35.948342   92925 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 19:09:35.948446   92925 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1213 19:09:35.956523   92925 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1213 19:09:35.970227   92925 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 19:09:35.985384   92925 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1213 19:09:36.004385   92925 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1213 19:09:36.008483   92925 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
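	This is the second /etc/hosts rewrite of the same shape (the first, at 19:09:34, added host.minikube.internal): strip any existing line for the name, append the desired mapping, and copy the temp file back over /etc/hosts. After both have run, the node should resolve the two minikube-internal names like this (sketch; other /etc/hosts entries omitted):

	    $ grep minikube.internal /etc/hosts
	    192.168.49.1	host.minikube.internal
	    192.168.49.254	control-plane.minikube.internal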
	I1213 19:09:36.019218   92925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 19:09:36.155982   92925 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 19:09:36.170330   92925 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 19:09:36.170793   92925 config.go:182] Loaded profile config "ha-605114": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 19:09:36.174251   92925 out.go:179] * Verifying Kubernetes components...
	I1213 19:09:36.177213   92925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 19:09:36.319740   92925 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 19:09:36.334811   92925 kapi.go:59] client config for ha-605114: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/client.crt", KeyFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/client.key", CAFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1213 19:09:36.334886   92925 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1213 19:09:36.335095   92925 node_ready.go:35] waiting up to 6m0s for node "ha-605114-m02" to be "Ready" ...
	I1213 19:09:39.281934   92925 node_ready.go:49] node "ha-605114-m02" is "Ready"
	I1213 19:09:39.281962   92925 node_ready.go:38] duration metric: took 2.946847766s for node "ha-605114-m02" to be "Ready" ...
	I1213 19:09:39.281975   92925 api_server.go:52] waiting for apiserver process to appear ...
	I1213 19:09:39.282034   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:39.782149   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:40.282856   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:40.782144   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:41.282958   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:41.782581   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:42.282264   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:42.782257   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:43.283132   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:43.782112   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:44.282168   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:44.782088   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:45.282593   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:45.782122   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:46.282927   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:46.782182   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:47.282980   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:47.783112   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:48.282633   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:48.782211   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:49.282732   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:49.782187   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:50.282735   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:50.782142   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:51.282519   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:51.782152   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:52.282197   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:52.782636   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:53.282768   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:53.782116   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:54.282300   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:54.782182   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:55.282883   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:55.783092   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:56.282203   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:56.783098   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:57.282717   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:57.782189   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:58.282252   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:58.782909   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:59.282100   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:59.782310   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:00.289145   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:00.782212   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:01.282192   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:01.782760   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:02.282108   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:02.782972   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:03.282353   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:03.782328   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:04.282366   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:04.782174   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:05.282835   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:05.782488   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:06.283036   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:06.782436   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:07.282292   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:07.782212   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:08.283033   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:08.783070   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:09.282897   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:09.782668   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:10.282222   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:10.782267   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:11.282198   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:11.782837   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:12.282212   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:12.783009   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:13.282406   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:13.782556   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:14.283140   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:14.782783   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:15.283077   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:15.783150   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:16.282934   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:16.783092   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:17.282186   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:17.782253   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:18.282771   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:18.782339   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:19.282255   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:19.782254   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:20.282346   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:20.782992   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:21.282270   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:21.782169   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:22.282176   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:22.782681   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:23.282402   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:23.783116   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:24.282118   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:24.782962   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:25.283031   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:25.783024   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:26.283105   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:26.782110   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:27.282833   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:27.782332   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:28.282978   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:28.782284   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:29.283095   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:29.782866   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:30.282438   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:30.782580   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:31.282697   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:31.783148   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:32.283119   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:32.782971   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:33.282108   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:33.783088   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:34.283075   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:34.782667   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:35.282868   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:35.782514   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:36.282200   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:10:36.282308   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:10:36.311092   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:36.311117   92925 cri.go:89] found id: ""
	I1213 19:10:36.311125   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:10:36.311180   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:36.314888   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:10:36.314970   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:10:36.342553   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:36.342573   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:36.342578   92925 cri.go:89] found id: ""
	I1213 19:10:36.342586   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:10:36.342655   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:36.346486   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:36.349986   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:10:36.350061   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:10:36.375198   92925 cri.go:89] found id: ""
	I1213 19:10:36.375262   92925 logs.go:282] 0 containers: []
	W1213 19:10:36.375275   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:10:36.375281   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:10:36.375350   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:10:36.406767   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:36.406789   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:36.406794   92925 cri.go:89] found id: ""
	I1213 19:10:36.406801   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:10:36.406857   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:36.410743   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:36.414390   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:10:36.414490   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:10:36.441810   92925 cri.go:89] found id: ""
	I1213 19:10:36.441833   92925 logs.go:282] 0 containers: []
	W1213 19:10:36.441841   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:10:36.441848   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:10:36.441911   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:10:36.468354   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:36.468374   92925 cri.go:89] found id: ""
	I1213 19:10:36.468382   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:10:36.468436   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:36.472238   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:10:36.472316   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:10:36.500356   92925 cri.go:89] found id: ""
	I1213 19:10:36.500383   92925 logs.go:282] 0 containers: []
	W1213 19:10:36.500394   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:10:36.500404   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:10:36.500414   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:10:36.593811   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:10:36.593845   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:10:36.607625   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:10:36.607656   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:10:37.031907   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:10:37.023726    1511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:37.024402    1511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:37.025999    1511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:37.026604    1511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:37.028296    1511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:10:37.023726    1511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:37.024402    1511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:37.025999    1511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:37.026604    1511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:37.028296    1511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:10:37.031933   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:10:37.031948   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:37.057050   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:10:37.057079   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:37.097228   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:10:37.097262   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:37.148963   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:10:37.149014   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:37.217399   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:10:37.217436   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:37.248174   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:10:37.248203   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:37.274722   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:10:37.274748   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:10:37.355342   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:10:37.355379   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:10:39.885413   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:39.896181   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:10:39.896250   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:10:39.928054   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:39.928078   92925 cri.go:89] found id: ""
	I1213 19:10:39.928087   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:10:39.928142   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:39.932690   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:10:39.932760   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:10:39.962089   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:39.962110   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:39.962114   92925 cri.go:89] found id: ""
	I1213 19:10:39.962122   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:10:39.962178   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:39.966008   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:39.970141   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:10:39.970211   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:10:40.031915   92925 cri.go:89] found id: ""
	I1213 19:10:40.031938   92925 logs.go:282] 0 containers: []
	W1213 19:10:40.031947   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:10:40.031954   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:10:40.032022   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:10:40.075124   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:40.075145   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:40.075150   92925 cri.go:89] found id: ""
	I1213 19:10:40.075157   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:10:40.075216   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:40.079588   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:40.083956   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:10:40.084077   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:10:40.120592   92925 cri.go:89] found id: ""
	I1213 19:10:40.120623   92925 logs.go:282] 0 containers: []
	W1213 19:10:40.120633   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:10:40.120640   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:10:40.120707   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:10:40.162573   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:40.162599   92925 cri.go:89] found id: ""
	I1213 19:10:40.162620   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:10:40.162692   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:40.167731   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:10:40.167810   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:10:40.197646   92925 cri.go:89] found id: ""
	I1213 19:10:40.197681   92925 logs.go:282] 0 containers: []
	W1213 19:10:40.197692   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:10:40.197701   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:10:40.197714   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:10:40.279428   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:10:40.270096    1639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:40.270945    1639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:40.271678    1639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:40.273521    1639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:40.274072    1639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:10:40.270096    1639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:40.270945    1639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:40.271678    1639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:40.273521    1639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:40.274072    1639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:10:40.279462   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:10:40.279476   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:40.317833   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:10:40.317867   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:40.365303   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:10:40.365339   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:40.391972   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:10:40.392006   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:10:40.467785   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:10:40.467824   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:10:40.499555   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:10:40.499587   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:10:40.601537   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:10:40.601571   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:10:40.614326   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:10:40.614357   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:40.643794   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:10:40.643823   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:40.696205   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:10:40.696242   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:43.224045   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:43.234786   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:10:43.234854   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:10:43.262459   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:43.262481   92925 cri.go:89] found id: ""
	I1213 19:10:43.262489   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:10:43.262544   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:43.267289   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:10:43.267362   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:10:43.294825   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:43.294846   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:43.294858   92925 cri.go:89] found id: ""
	I1213 19:10:43.294873   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:10:43.294931   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:43.298717   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:43.302500   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:10:43.302576   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:10:43.328978   92925 cri.go:89] found id: ""
	I1213 19:10:43.329001   92925 logs.go:282] 0 containers: []
	W1213 19:10:43.329048   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:10:43.329055   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:10:43.329115   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:10:43.358394   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:43.358419   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:43.358426   92925 cri.go:89] found id: ""
	I1213 19:10:43.358434   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:10:43.358544   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:43.363176   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:43.366906   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:10:43.366996   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:10:43.396556   92925 cri.go:89] found id: ""
	I1213 19:10:43.396583   92925 logs.go:282] 0 containers: []
	W1213 19:10:43.396592   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:10:43.396598   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:10:43.396657   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:10:43.422776   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:43.422803   92925 cri.go:89] found id: ""
	I1213 19:10:43.422813   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:10:43.422886   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:43.426512   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:10:43.426579   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:10:43.452942   92925 cri.go:89] found id: ""
	I1213 19:10:43.452966   92925 logs.go:282] 0 containers: []
	W1213 19:10:43.452975   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:10:43.452984   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:10:43.452996   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:43.479637   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:10:43.479708   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:10:43.492492   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:10:43.492521   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:43.555898   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:10:43.555930   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:43.583059   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:10:43.583089   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:10:43.665528   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:10:43.665562   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:10:43.713108   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:10:43.713136   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:10:43.817894   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:10:43.817930   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:10:43.900953   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:10:43.892916    1822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:43.893797    1822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:43.895356    1822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:43.895650    1822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:43.897247    1822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:10:43.892916    1822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:43.893797    1822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:43.895356    1822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:43.895650    1822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:43.897247    1822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:10:43.900978   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:10:43.900992   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:43.928040   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:10:43.928067   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:43.989295   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:10:43.989349   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:46.551759   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:46.562922   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:10:46.562999   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:10:46.590576   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:46.590607   92925 cri.go:89] found id: ""
	I1213 19:10:46.590615   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:10:46.590669   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:46.594481   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:10:46.594557   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:10:46.619444   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:46.619466   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:46.619472   92925 cri.go:89] found id: ""
	I1213 19:10:46.619480   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:10:46.619562   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:46.623350   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:46.626652   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:10:46.626726   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:10:46.655019   92925 cri.go:89] found id: ""
	I1213 19:10:46.655045   92925 logs.go:282] 0 containers: []
	W1213 19:10:46.655055   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:10:46.655061   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:10:46.655119   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:10:46.685081   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:46.685108   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:46.685113   92925 cri.go:89] found id: ""
	I1213 19:10:46.685121   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:10:46.685178   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:46.689664   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:46.693381   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:10:46.693455   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:10:46.719871   92925 cri.go:89] found id: ""
	I1213 19:10:46.719897   92925 logs.go:282] 0 containers: []
	W1213 19:10:46.719906   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:10:46.719914   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:10:46.719979   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:10:46.747153   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:46.747176   92925 cri.go:89] found id: ""
	I1213 19:10:46.747184   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:10:46.747239   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:46.751093   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:10:46.751198   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:10:46.777729   92925 cri.go:89] found id: ""
	I1213 19:10:46.777800   92925 logs.go:282] 0 containers: []
	W1213 19:10:46.777816   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:10:46.777827   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:10:46.777840   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:46.807286   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:10:46.807315   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:10:46.900226   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:10:46.900266   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:10:46.913850   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:10:46.913877   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:10:46.995097   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:10:46.986432    1930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:46.987537    1930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:46.988185    1930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:46.989944    1930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:46.990430    1930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:10:46.986432    1930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:46.987537    1930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:46.988185    1930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:46.989944    1930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:46.990430    1930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:10:46.995121   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:10:46.995146   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:47.020980   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:10:47.021038   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:47.062312   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:10:47.062348   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:10:47.143840   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:10:47.143916   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:10:47.176420   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:10:47.176455   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:47.221958   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:10:47.222003   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:47.276308   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:10:47.276349   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:49.804769   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:49.815535   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:10:49.815609   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:10:49.841153   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:49.841227   92925 cri.go:89] found id: ""
	I1213 19:10:49.841258   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:10:49.841341   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:49.844798   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:10:49.844903   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:10:49.872086   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:49.872111   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:49.872117   92925 cri.go:89] found id: ""
	I1213 19:10:49.872124   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:10:49.872178   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:49.875975   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:49.879817   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:10:49.879892   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:10:49.918961   92925 cri.go:89] found id: ""
	I1213 19:10:49.918987   92925 logs.go:282] 0 containers: []
	W1213 19:10:49.918996   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:10:49.919002   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:10:49.919059   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:10:49.959969   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:49.959994   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:49.959999   92925 cri.go:89] found id: ""
	I1213 19:10:49.960007   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:10:49.960063   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:49.964635   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:49.969140   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:10:49.969208   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:10:50.006023   92925 cri.go:89] found id: ""
	I1213 19:10:50.006049   92925 logs.go:282] 0 containers: []
	W1213 19:10:50.006058   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:10:50.006064   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:10:50.006143   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:10:50.040945   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:50.040965   92925 cri.go:89] found id: ""
	I1213 19:10:50.040973   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:10:50.041060   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:50.044991   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:10:50.045100   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:10:50.073352   92925 cri.go:89] found id: ""
	I1213 19:10:50.073383   92925 logs.go:282] 0 containers: []
	W1213 19:10:50.073409   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:10:50.073420   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:10:50.073437   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:10:50.092169   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:10:50.092219   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:50.167681   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:10:50.167719   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:50.220989   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:10:50.221028   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:50.252059   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:10:50.252091   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:10:50.358508   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:10:50.358555   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:10:50.434424   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:10:50.426219    2082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:50.426850    2082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:50.428449    2082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:50.429020    2082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:50.430880    2082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:10:50.426219    2082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:50.426850    2082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:50.428449    2082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:50.429020    2082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:50.430880    2082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:10:50.434452   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:10:50.434467   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:50.458963   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:10:50.458992   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:50.516376   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:10:50.516410   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:50.543978   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:10:50.544009   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:10:50.619429   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:10:50.619468   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:10:53.153421   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:53.163979   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:10:53.164048   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:10:53.191198   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:53.191259   92925 cri.go:89] found id: ""
	I1213 19:10:53.191291   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:10:53.191363   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:53.195132   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:10:53.195204   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:10:53.222253   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:53.222276   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:53.222280   92925 cri.go:89] found id: ""
	I1213 19:10:53.222287   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:10:53.222370   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:53.226176   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:53.229762   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:10:53.229878   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:10:53.260062   92925 cri.go:89] found id: ""
	I1213 19:10:53.260088   92925 logs.go:282] 0 containers: []
	W1213 19:10:53.260096   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:10:53.260103   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:10:53.260159   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:10:53.289940   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:53.290005   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:53.290024   92925 cri.go:89] found id: ""
	I1213 19:10:53.290037   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:10:53.290106   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:53.293745   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:53.297116   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:10:53.297199   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:10:53.324233   92925 cri.go:89] found id: ""
	I1213 19:10:53.324259   92925 logs.go:282] 0 containers: []
	W1213 19:10:53.324268   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:10:53.324274   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:10:53.324329   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:10:53.355230   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:53.355252   92925 cri.go:89] found id: ""
	I1213 19:10:53.355260   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:10:53.355312   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:53.358865   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:10:53.358932   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:10:53.388377   92925 cri.go:89] found id: ""
	I1213 19:10:53.388460   92925 logs.go:282] 0 containers: []
	W1213 19:10:53.388486   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:10:53.388531   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:10:53.388561   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:10:53.482197   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:10:53.482233   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:10:53.495635   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:10:53.495666   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:53.527174   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:10:53.527201   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:53.568473   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:10:53.568509   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:53.613038   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:10:53.613068   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:53.666213   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:10:53.666248   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:10:53.746993   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:10:53.747031   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:10:53.777726   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:10:53.777758   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:10:53.849162   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:10:53.840835    2240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:53.841725    2240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:53.842564    2240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:53.844081    2240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:53.844396    2240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:10:53.840835    2240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:53.841725    2240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:53.842564    2240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:53.844081    2240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:53.844396    2240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
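	(Editor's note, not part of the captured log.) The block above, and the identical blocks that repeat below, show `kubectl describe nodes` failing because nothing answers on `localhost:8443`, even though a kube-apiserver container ID was found: the control plane is still coming up or restarting while minikube keeps polling. A minimal way to check the same thing by hand from inside the node, assuming standard `minikube ssh` access and `crictl` on the PATH (both assumptions, not taken from this report; the profile name below is a placeholder):

	```bash
	# Open a shell on the node (profile name is hypothetical).
	minikube ssh -p <profile>

	# Is the apiserver container actually running, or present but exited?
	sudo crictl ps -a --name kube-apiserver

	# Does anything answer on the secure port? "connection refused" matches the
	# errors above; any HTTP response (even 401/403) means the apiserver is up.
	curl -k https://localhost:8443/healthz

	# If the container is running but the port stays closed, the kubelet journal
	# usually shows why it is being restarted.
	sudo journalctl -u kubelet -n 100 --no-pager
	```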
	I1213 19:10:53.849193   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:10:53.849207   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:53.879522   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:10:53.879551   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:56.408599   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:56.420063   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:10:56.420130   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:10:56.446598   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:56.446622   92925 cri.go:89] found id: ""
	I1213 19:10:56.446630   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:10:56.446691   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:56.450451   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:10:56.450519   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:10:56.477437   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:56.477460   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:56.477465   92925 cri.go:89] found id: ""
	I1213 19:10:56.477472   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:10:56.477560   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:56.481341   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:56.484891   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:10:56.484963   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:10:56.513437   92925 cri.go:89] found id: ""
	I1213 19:10:56.513459   92925 logs.go:282] 0 containers: []
	W1213 19:10:56.513467   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:10:56.513473   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:10:56.513531   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:10:56.542772   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:56.542812   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:56.542818   92925 cri.go:89] found id: ""
	I1213 19:10:56.542845   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:10:56.542930   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:56.546773   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:56.550355   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:10:56.550430   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:10:56.577663   92925 cri.go:89] found id: ""
	I1213 19:10:56.577687   92925 logs.go:282] 0 containers: []
	W1213 19:10:56.577695   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:10:56.577701   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:10:56.577811   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:10:56.604755   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:56.604827   92925 cri.go:89] found id: ""
	I1213 19:10:56.604849   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:10:56.604945   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:56.608549   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:10:56.608618   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:10:56.635735   92925 cri.go:89] found id: ""
	I1213 19:10:56.635759   92925 logs.go:282] 0 containers: []
	W1213 19:10:56.635767   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:10:56.635777   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:10:56.635789   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:10:56.729353   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:10:56.729388   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:10:56.741845   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:10:56.741874   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:10:56.815151   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:10:56.806729    2332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:56.807450    2332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:56.808916    2332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:56.809436    2332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:56.811611    2332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:10:56.806729    2332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:56.807450    2332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:56.808916    2332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:56.809436    2332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:56.811611    2332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:10:56.815178   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:10:56.815193   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:56.871711   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:10:56.871748   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:56.904003   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:10:56.904034   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:10:56.941519   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:10:56.941549   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:56.974994   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:10:56.975022   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:57.015259   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:10:57.015290   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:57.059492   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:10:57.059527   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:57.085661   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:10:57.085690   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:10:59.675412   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:59.686117   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:10:59.686192   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:10:59.710921   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:59.710951   92925 cri.go:89] found id: ""
	I1213 19:10:59.710960   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:10:59.711015   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:59.714894   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:10:59.715008   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:10:59.742170   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:59.742193   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:59.742199   92925 cri.go:89] found id: ""
	I1213 19:10:59.742206   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:10:59.742261   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:59.746138   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:59.750866   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:10:59.750942   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:10:59.777917   92925 cri.go:89] found id: ""
	I1213 19:10:59.777943   92925 logs.go:282] 0 containers: []
	W1213 19:10:59.777951   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:10:59.777957   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:10:59.778015   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:10:59.803883   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:59.803903   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:59.803908   92925 cri.go:89] found id: ""
	I1213 19:10:59.803916   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:10:59.803971   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:59.807903   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:59.811388   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:10:59.811453   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:10:59.837952   92925 cri.go:89] found id: ""
	I1213 19:10:59.837977   92925 logs.go:282] 0 containers: []
	W1213 19:10:59.837986   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:10:59.837992   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:10:59.838048   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:10:59.864431   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:59.864490   92925 cri.go:89] found id: ""
	I1213 19:10:59.864512   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:10:59.864594   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:59.869272   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:10:59.869345   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:10:59.896571   92925 cri.go:89] found id: ""
	I1213 19:10:59.896603   92925 logs.go:282] 0 containers: []
	W1213 19:10:59.896612   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:10:59.896622   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:10:59.896634   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:10:59.997222   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:10:59.997313   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:00.122051   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:00.122166   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:00.334228   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:00.323858    2472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:00.324625    2472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:00.326029    2472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:00.326896    2472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:00.328835    2472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:00.323858    2472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:00.324625    2472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:00.326029    2472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:00.326896    2472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:00.328835    2472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:00.334270   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:00.334284   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:00.397345   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:00.397381   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:00.460082   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:00.460118   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:00.507030   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:00.507068   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:00.561579   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:00.561611   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:00.590319   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:11:00.590346   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:00.618590   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:00.618617   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:00.700620   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:00.700655   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:03.247538   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:03.260650   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:03.260720   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:03.296710   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:03.296736   92925 cri.go:89] found id: ""
	I1213 19:11:03.296744   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:03.296804   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:03.300974   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:03.301083   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:03.332989   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:03.333019   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:03.333024   92925 cri.go:89] found id: ""
	I1213 19:11:03.333031   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:03.333085   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:03.337959   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:03.341569   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:03.341642   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:03.367805   92925 cri.go:89] found id: ""
	I1213 19:11:03.367831   92925 logs.go:282] 0 containers: []
	W1213 19:11:03.367840   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:03.367847   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:03.367910   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:03.396144   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:03.396165   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:03.396170   92925 cri.go:89] found id: ""
	I1213 19:11:03.396177   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:03.396234   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:03.400643   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:03.404350   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:03.404422   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:03.431472   92925 cri.go:89] found id: ""
	I1213 19:11:03.431498   92925 logs.go:282] 0 containers: []
	W1213 19:11:03.431508   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:03.431520   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:03.431602   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:03.459968   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:03.460034   92925 cri.go:89] found id: ""
	I1213 19:11:03.460058   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:11:03.460134   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:03.464138   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:03.464230   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:03.491871   92925 cri.go:89] found id: ""
	I1213 19:11:03.491897   92925 logs.go:282] 0 containers: []
	W1213 19:11:03.491906   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:03.491916   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:03.491928   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:03.528376   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:03.528451   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:03.562095   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:03.562124   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:03.575381   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:03.575410   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:03.602586   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:03.602615   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:03.651880   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:03.651912   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:03.708104   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:11:03.708142   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:03.736240   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:03.736268   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:03.814277   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:03.814314   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:03.920505   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:03.920542   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:04.025281   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:04.014467    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:04.015603    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:04.016913    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:04.017960    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:04.019083    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:04.014467    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:04.015603    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:04.016913    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:04.017960    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:04.019083    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:04.025308   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:04.025326   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:06.584492   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:06.595822   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:06.595900   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:06.627891   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:06.627917   92925 cri.go:89] found id: ""
	I1213 19:11:06.627925   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:06.627982   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:06.632107   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:06.632184   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:06.657896   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:06.657921   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:06.657926   92925 cri.go:89] found id: ""
	I1213 19:11:06.657934   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:06.657989   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:06.661493   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:06.665545   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:06.665611   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:06.696673   92925 cri.go:89] found id: ""
	I1213 19:11:06.696748   92925 logs.go:282] 0 containers: []
	W1213 19:11:06.696773   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:06.696792   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:06.696879   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:06.724330   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:06.724355   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:06.724360   92925 cri.go:89] found id: ""
	I1213 19:11:06.724368   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:06.724422   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:06.728040   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:06.731506   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:06.731610   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:06.756515   92925 cri.go:89] found id: ""
	I1213 19:11:06.756578   92925 logs.go:282] 0 containers: []
	W1213 19:11:06.756601   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:06.756622   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:06.756700   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:06.783035   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:06.783094   92925 cri.go:89] found id: ""
	I1213 19:11:06.783117   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:11:06.783184   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:06.787082   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:06.787158   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:06.813991   92925 cri.go:89] found id: ""
	I1213 19:11:06.814014   92925 logs.go:282] 0 containers: []
	W1213 19:11:06.814022   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:06.814031   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:06.814043   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:06.860023   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:06.860057   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:06.915266   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:06.915303   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:07.005436   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:07.005480   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:07.041558   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:07.041591   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:07.055111   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:07.055140   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:07.085506   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:07.085534   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:07.140042   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:07.140080   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:07.170267   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:11:07.170300   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:07.197645   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:07.197676   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:07.298125   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:07.298167   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:07.368495   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:07.358879    2811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:07.359581    2811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:07.361161    2811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:07.361458    2811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:07.363677    2811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:07.358879    2811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:07.359581    2811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:07.361161    2811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:07.361458    2811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:07.363677    2811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
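	(Editor's note, not part of the captured log.) Each polling cycle in this log finds two container IDs for etcd and kube-scheduler but only one for kube-apiserver and kube-controller-manager; the extra ID is usually an exited container left over from an earlier start attempt rather than a second live instance. A quick way to tell which instance is current, sketched under the same assumptions as the snippet earlier (shell on the node, `crictl` available; the container ID below is a placeholder):

	```bash
	# List every etcd container with its state and creation time; the exited one
	# is typically the stale instance from the previous start attempt.
	sudo crictl ps -a --name etcd

	# Tail only the newest instance to see why the current control plane is unhealthy.
	sudo crictl logs --tail 100 <newest-etcd-id>
	```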
	I1213 19:11:09.868760   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:09.879760   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:09.879831   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:09.907241   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:09.907264   92925 cri.go:89] found id: ""
	I1213 19:11:09.907272   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:09.907331   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:09.910883   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:09.910954   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:09.936137   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:09.936156   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:09.936161   92925 cri.go:89] found id: ""
	I1213 19:11:09.936167   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:09.936222   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:09.940048   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:09.951154   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:09.951222   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:09.985435   92925 cri.go:89] found id: ""
	I1213 19:11:09.985520   92925 logs.go:282] 0 containers: []
	W1213 19:11:09.985532   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:09.985540   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:09.985648   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:10.028412   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:10.028487   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:10.028521   92925 cri.go:89] found id: ""
	I1213 19:11:10.028549   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:10.028643   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:10.035436   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:10.040716   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:10.040834   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:10.070216   92925 cri.go:89] found id: ""
	I1213 19:11:10.070245   92925 logs.go:282] 0 containers: []
	W1213 19:11:10.070255   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:10.070261   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:10.070323   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:10.107151   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:10.107174   92925 cri.go:89] found id: ""
	I1213 19:11:10.107183   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:11:10.107241   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:10.111700   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:10.111773   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:10.148889   92925 cri.go:89] found id: ""
	I1213 19:11:10.148913   92925 logs.go:282] 0 containers: []
	W1213 19:11:10.148922   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:10.148931   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:10.148946   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:10.183850   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:10.183953   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:10.284535   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:10.284572   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:10.361456   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:10.353378    2893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:10.354229    2893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:10.355719    2893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:10.356209    2893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:10.357653    2893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:10.353378    2893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:10.354229    2893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:10.355719    2893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:10.356209    2893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:10.357653    2893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:10.361521   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:10.361543   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:10.401195   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:10.401230   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:10.466771   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:11:10.466806   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:10.492988   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:10.493041   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:10.506114   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:10.506143   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:10.534614   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:10.534643   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:10.589313   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:10.589346   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:10.621617   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:10.621646   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:13.202940   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:13.214007   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:13.214076   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:13.241311   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:13.241334   92925 cri.go:89] found id: ""
	I1213 19:11:13.241342   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:13.241399   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:13.244857   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:13.244973   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:13.271246   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:13.271272   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:13.271277   92925 cri.go:89] found id: ""
	I1213 19:11:13.271284   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:13.271368   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:13.275204   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:13.278868   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:13.278941   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:13.306334   92925 cri.go:89] found id: ""
	I1213 19:11:13.306365   92925 logs.go:282] 0 containers: []
	W1213 19:11:13.306373   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:13.306379   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:13.306440   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:13.332388   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:13.332407   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:13.332412   92925 cri.go:89] found id: ""
	I1213 19:11:13.332419   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:13.332474   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:13.336618   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:13.340235   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:13.340305   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:13.366487   92925 cri.go:89] found id: ""
	I1213 19:11:13.366522   92925 logs.go:282] 0 containers: []
	W1213 19:11:13.366531   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:13.366537   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:13.366597   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:13.397475   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:13.397496   92925 cri.go:89] found id: ""
	I1213 19:11:13.397504   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:11:13.397565   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:13.401266   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:13.401377   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:13.430168   92925 cri.go:89] found id: ""
	I1213 19:11:13.430196   92925 logs.go:282] 0 containers: []
	W1213 19:11:13.430205   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:13.430221   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:13.430235   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:13.496086   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:13.486609    3016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:13.487472    3016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:13.489304    3016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:13.489961    3016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:13.491916    3016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:13.486609    3016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:13.487472    3016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:13.489304    3016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:13.489961    3016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:13.491916    3016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:13.496111   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:13.496124   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:13.548378   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:13.548413   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:13.601861   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:13.601899   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:13.634165   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:11:13.634193   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:13.662242   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:13.662270   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:13.737810   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:13.737846   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:13.770540   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:13.770574   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:13.783830   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:13.783907   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:13.810122   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:13.810149   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:13.856452   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:13.856485   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:16.448594   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:16.459829   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:16.459900   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:16.489717   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:16.489737   92925 cri.go:89] found id: ""
	I1213 19:11:16.489745   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:16.489799   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:16.494205   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:16.494290   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:16.529314   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:16.529336   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:16.529340   92925 cri.go:89] found id: ""
	I1213 19:11:16.529349   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:16.529404   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:16.533136   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:16.536814   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:16.536887   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:16.563026   92925 cri.go:89] found id: ""
	I1213 19:11:16.563064   92925 logs.go:282] 0 containers: []
	W1213 19:11:16.563073   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:16.563079   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:16.563139   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:16.594519   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:16.594541   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:16.594546   92925 cri.go:89] found id: ""
	I1213 19:11:16.594554   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:16.594611   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:16.598288   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:16.601875   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:16.601946   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:16.628577   92925 cri.go:89] found id: ""
	I1213 19:11:16.628603   92925 logs.go:282] 0 containers: []
	W1213 19:11:16.628612   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:16.628618   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:16.628676   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:16.656978   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:16.657001   92925 cri.go:89] found id: ""
	I1213 19:11:16.657039   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:11:16.657095   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:16.661124   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:16.661236   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:16.695697   92925 cri.go:89] found id: ""
	I1213 19:11:16.695731   92925 logs.go:282] 0 containers: []
	W1213 19:11:16.695739   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:16.695748   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:16.695760   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:16.766672   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:16.757776    3152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:16.758599    3152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:16.760229    3152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:16.760563    3152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:16.762386    3152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:16.757776    3152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:16.758599    3152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:16.760229    3152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:16.760563    3152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:16.762386    3152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:16.766696   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:16.766709   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:16.808187   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:16.808237   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:16.850027   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:16.850062   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:16.906135   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:16.906174   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:16.935630   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:11:16.935661   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:16.963433   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:16.963463   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:17.045818   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:17.045852   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:17.079053   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:17.079080   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:17.186217   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:17.186251   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:17.198725   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:17.198760   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:19.727394   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:19.738364   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:19.738431   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:19.768160   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:19.768183   92925 cri.go:89] found id: ""
	I1213 19:11:19.768196   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:19.768252   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:19.772004   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:19.772128   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:19.799342   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:19.799368   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:19.799374   92925 cri.go:89] found id: ""
	I1213 19:11:19.799382   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:19.799466   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:19.803455   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:19.807247   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:19.807340   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:19.835979   92925 cri.go:89] found id: ""
	I1213 19:11:19.836005   92925 logs.go:282] 0 containers: []
	W1213 19:11:19.836014   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:19.836021   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:19.836081   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:19.864302   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:19.864325   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:19.864331   92925 cri.go:89] found id: ""
	I1213 19:11:19.864338   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:19.864397   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:19.868104   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:19.871725   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:19.871812   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:19.899890   92925 cri.go:89] found id: ""
	I1213 19:11:19.899919   92925 logs.go:282] 0 containers: []
	W1213 19:11:19.899937   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:19.899944   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:19.900012   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:19.927600   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:19.927624   92925 cri.go:89] found id: ""
	I1213 19:11:19.927632   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:11:19.927685   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:19.931424   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:19.931509   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:19.961424   92925 cri.go:89] found id: ""
	I1213 19:11:19.961454   92925 logs.go:282] 0 containers: []
	W1213 19:11:19.961469   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:19.961479   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:19.961492   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:20.002155   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:20.002284   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:20.082123   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:20.071968    3295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:20.072791    3295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:20.075159    3295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:20.076013    3295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:20.077851    3295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:20.071968    3295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:20.072791    3295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:20.075159    3295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:20.076013    3295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:20.077851    3295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:20.082148   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:20.082162   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:20.127578   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:20.127614   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:20.174673   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:11:20.174713   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:20.204713   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:20.204791   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:20.282989   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:20.283026   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:20.327361   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:20.327436   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:20.427993   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:20.428032   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:20.442295   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:20.442326   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:20.471477   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:20.471510   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:23.025659   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:23.036724   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:23.036796   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:23.064245   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:23.064269   92925 cri.go:89] found id: ""
	I1213 19:11:23.064281   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:23.064341   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:23.068194   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:23.068269   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:23.097592   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:23.097616   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:23.097622   92925 cri.go:89] found id: ""
	I1213 19:11:23.097629   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:23.097692   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:23.104525   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:23.110378   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:23.110459   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:23.144932   92925 cri.go:89] found id: ""
	I1213 19:11:23.144958   92925 logs.go:282] 0 containers: []
	W1213 19:11:23.144966   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:23.144972   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:23.145063   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:23.177104   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:23.177129   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:23.177134   92925 cri.go:89] found id: ""
	I1213 19:11:23.177142   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:23.177197   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:23.181178   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:23.185904   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:23.185988   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:23.213662   92925 cri.go:89] found id: ""
	I1213 19:11:23.213740   92925 logs.go:282] 0 containers: []
	W1213 19:11:23.213765   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:23.213784   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:23.213891   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:23.244233   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:23.244298   92925 cri.go:89] found id: ""
	I1213 19:11:23.244322   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:11:23.244413   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:23.248148   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:23.248228   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:23.276740   92925 cri.go:89] found id: ""
	I1213 19:11:23.276765   92925 logs.go:282] 0 containers: []
	W1213 19:11:23.276773   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:23.276784   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:23.276796   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:23.336420   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:11:23.336453   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:23.368543   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:23.368572   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:23.450730   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:23.450772   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:23.483510   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:23.483550   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:23.628675   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:23.619033    3457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:23.620672    3457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:23.621438    3457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:23.623126    3457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:23.623775    3457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:23.619033    3457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:23.620672    3457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:23.621438    3457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:23.623126    3457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:23.623775    3457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:23.628699   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:23.628713   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:23.665846   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:23.665882   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:23.713922   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:23.713959   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:23.752354   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:23.752384   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:23.858109   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:23.858150   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:23.871373   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:23.871404   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:26.419535   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:26.430634   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:26.430705   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:26.458628   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:26.458650   92925 cri.go:89] found id: ""
	I1213 19:11:26.458661   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:26.458716   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:26.462422   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:26.462495   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:26.490349   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:26.490389   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:26.490394   92925 cri.go:89] found id: ""
	I1213 19:11:26.490401   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:26.490468   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:26.494405   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:26.498636   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:26.498716   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:26.528607   92925 cri.go:89] found id: ""
	I1213 19:11:26.528637   92925 logs.go:282] 0 containers: []
	W1213 19:11:26.528646   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:26.528653   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:26.528722   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:26.558710   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:26.558733   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:26.558741   92925 cri.go:89] found id: ""
	I1213 19:11:26.558748   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:26.558825   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:26.562803   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:26.566707   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:26.566808   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:26.596729   92925 cri.go:89] found id: ""
	I1213 19:11:26.596754   92925 logs.go:282] 0 containers: []
	W1213 19:11:26.596763   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:26.596769   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:26.596826   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:26.624054   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:26.624077   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:26.624083   92925 cri.go:89] found id: ""
	I1213 19:11:26.624090   92925 logs.go:282] 2 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:11:26.624167   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:26.628449   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:26.632716   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:26.632822   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:26.659170   92925 cri.go:89] found id: ""
	I1213 19:11:26.659195   92925 logs.go:282] 0 containers: []
	W1213 19:11:26.659204   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:26.659213   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:26.659226   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:26.694272   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:11:26.694300   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:26.720924   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:11:26.720959   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:26.751980   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:26.752009   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:26.824509   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:26.824547   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:26.855705   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:26.855733   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:26.867403   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:26.867431   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:26.906787   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:26.906823   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:26.951319   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:26.951351   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:27.006541   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:27.006579   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:27.033554   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:27.033583   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:27.135230   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:27.135266   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:27.210106   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:27.201700    3649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:27.202413    3649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:27.203893    3649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:27.204311    3649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:27.205969    3649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:27.201700    3649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:27.202413    3649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:27.203893    3649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:27.204311    3649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:27.205969    3649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:29.711829   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:29.723531   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:29.723601   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:29.753961   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:29.753984   92925 cri.go:89] found id: ""
	I1213 19:11:29.753992   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:29.754050   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:29.757806   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:29.757873   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:29.783149   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:29.783181   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:29.783186   92925 cri.go:89] found id: ""
	I1213 19:11:29.783194   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:29.783263   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:29.787082   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:29.790979   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:29.791109   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:29.817959   92925 cri.go:89] found id: ""
	I1213 19:11:29.817985   92925 logs.go:282] 0 containers: []
	W1213 19:11:29.817994   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:29.818000   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:29.818060   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:29.846235   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:29.846257   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:29.846262   92925 cri.go:89] found id: ""
	I1213 19:11:29.846270   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:29.846351   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:29.849953   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:29.853572   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:29.853692   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:29.879800   92925 cri.go:89] found id: ""
	I1213 19:11:29.879834   92925 logs.go:282] 0 containers: []
	W1213 19:11:29.879843   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:29.879850   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:29.879915   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:29.907082   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:29.907116   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:29.907121   92925 cri.go:89] found id: ""
	I1213 19:11:29.907128   92925 logs.go:282] 2 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:11:29.907192   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:29.910914   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:29.914566   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:29.914651   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:29.939124   92925 cri.go:89] found id: ""
	I1213 19:11:29.939149   92925 logs.go:282] 0 containers: []
	W1213 19:11:29.939158   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:29.939168   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:29.939205   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:29.981605   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:29.981639   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:30.089079   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:30.089116   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:30.156090   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:30.156124   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:30.186549   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:11:30.186580   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:30.214921   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:11:30.214950   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:30.242668   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:30.242697   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:30.319413   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:30.319445   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:30.419178   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:30.419215   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:30.431724   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:30.431753   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:30.501053   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:30.492849    3776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:30.493577    3776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:30.495362    3776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:30.495976    3776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:30.497562    3776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:30.492849    3776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:30.493577    3776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:30.495362    3776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:30.495976    3776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:30.497562    3776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:30.501078   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:30.501092   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:30.532550   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:30.532577   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:33.076374   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:33.087831   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:33.087899   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:33.126218   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:33.126241   92925 cri.go:89] found id: ""
	I1213 19:11:33.126251   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:33.126315   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:33.130647   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:33.130731   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:33.158982   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:33.159013   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:33.159020   92925 cri.go:89] found id: ""
	I1213 19:11:33.159028   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:33.159094   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:33.162984   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:33.166562   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:33.166635   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:33.193330   92925 cri.go:89] found id: ""
	I1213 19:11:33.193353   92925 logs.go:282] 0 containers: []
	W1213 19:11:33.193361   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:33.193367   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:33.193423   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:33.221129   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:33.221153   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:33.221159   92925 cri.go:89] found id: ""
	I1213 19:11:33.221166   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:33.221239   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:33.225797   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:33.229503   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:33.229615   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:33.257761   92925 cri.go:89] found id: ""
	I1213 19:11:33.257786   92925 logs.go:282] 0 containers: []
	W1213 19:11:33.257795   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:33.257802   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:33.257865   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:33.285915   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:33.285941   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:33.285957   92925 cri.go:89] found id: ""
	I1213 19:11:33.285968   92925 logs.go:282] 2 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:11:33.286026   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:33.289819   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:33.293581   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:33.293655   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:33.324324   92925 cri.go:89] found id: ""
	I1213 19:11:33.324348   92925 logs.go:282] 0 containers: []
	W1213 19:11:33.324357   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:33.324366   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:33.324377   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:33.350842   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:33.350913   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:33.424344   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:33.424380   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:33.452897   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:33.452930   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:33.504468   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:33.504506   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:33.579150   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:11:33.579183   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:33.607049   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:11:33.607076   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:33.633297   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:33.633326   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:33.668670   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:33.668699   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:33.766904   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:33.766936   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:33.780538   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:33.780567   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:33.857253   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:33.848822    3937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:33.849778    3937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:33.851312    3937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:33.851759    3937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:33.853392    3937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:33.848822    3937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:33.849778    3937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:33.851312    3937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:33.851759    3937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:33.853392    3937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:33.857275   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:33.857290   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:36.398970   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:36.410341   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:36.410416   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:36.438456   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:36.438479   92925 cri.go:89] found id: ""
	I1213 19:11:36.438488   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:36.438568   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:36.442320   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:36.442395   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:36.470092   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:36.470116   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:36.470121   92925 cri.go:89] found id: ""
	I1213 19:11:36.470131   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:36.470218   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:36.474021   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:36.477467   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:36.477578   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:36.505647   92925 cri.go:89] found id: ""
	I1213 19:11:36.505670   92925 logs.go:282] 0 containers: []
	W1213 19:11:36.505714   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:36.505733   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:36.505804   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:36.537872   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:36.537895   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:36.537900   92925 cri.go:89] found id: ""
	I1213 19:11:36.537907   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:36.537961   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:36.541660   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:36.545244   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:36.545314   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:36.570195   92925 cri.go:89] found id: ""
	I1213 19:11:36.570228   92925 logs.go:282] 0 containers: []
	W1213 19:11:36.570238   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:36.570250   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:36.570339   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:36.595894   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:36.595958   92925 cri.go:89] found id: ""
	I1213 19:11:36.595979   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:11:36.596064   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:36.599675   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:36.599789   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:36.624988   92925 cri.go:89] found id: ""
	I1213 19:11:36.625083   92925 logs.go:282] 0 containers: []
	W1213 19:11:36.625101   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:36.625112   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:36.625123   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:36.718891   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:36.718924   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:36.786494   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:36.778476    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:36.779141    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:36.780744    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:36.781242    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:36.782695    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:36.778476    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:36.779141    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:36.780744    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:36.781242    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:36.782695    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:36.786519   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:36.786531   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:36.828295   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:36.828328   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:36.871560   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:36.871591   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:36.941295   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:36.941335   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:37.023869   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:37.023902   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:37.055672   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:37.055700   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:37.069301   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:37.069334   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:37.098989   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:37.099015   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:37.135738   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:11:37.135771   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:39.664114   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:39.675928   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:39.675999   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:39.702971   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:39.702989   92925 cri.go:89] found id: ""
	I1213 19:11:39.702998   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:39.703053   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:39.707021   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:39.707096   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:39.733615   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:39.733637   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:39.733642   92925 cri.go:89] found id: ""
	I1213 19:11:39.733663   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:39.733720   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:39.737520   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:39.740992   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:39.741107   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:39.769090   92925 cri.go:89] found id: ""
	I1213 19:11:39.769174   92925 logs.go:282] 0 containers: []
	W1213 19:11:39.769194   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:39.769201   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:39.769351   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:39.804293   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:39.804314   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:39.804319   92925 cri.go:89] found id: ""
	I1213 19:11:39.804326   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:39.804389   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:39.808495   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:39.812181   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:39.812255   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:39.838217   92925 cri.go:89] found id: ""
	I1213 19:11:39.838243   92925 logs.go:282] 0 containers: []
	W1213 19:11:39.838252   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:39.838259   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:39.838314   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:39.866484   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:39.866504   92925 cri.go:89] found id: ""
	I1213 19:11:39.866512   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:11:39.866567   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:39.870814   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:39.870885   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:39.908207   92925 cri.go:89] found id: ""
	I1213 19:11:39.908233   92925 logs.go:282] 0 containers: []
	W1213 19:11:39.908243   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:39.908252   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:39.908264   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:39.920472   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:39.920499   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:39.948910   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:39.948951   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:40.012782   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:11:40.012825   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:40.047267   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:40.047297   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:40.129790   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:40.129871   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:40.168487   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:40.168519   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:40.269381   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:40.269456   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:40.338885   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:40.330165    4198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:40.330955    4198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:40.333137    4198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:40.333832    4198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:40.335154    4198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:40.330165    4198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:40.330955    4198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:40.333137    4198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:40.333832    4198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:40.335154    4198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:40.338906   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:40.338919   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:40.394986   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:40.395024   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:40.460751   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:40.460799   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:42.992519   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:43.004031   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:43.004110   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:43.032556   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:43.032578   92925 cri.go:89] found id: ""
	I1213 19:11:43.032586   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:43.032640   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:43.036332   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:43.036401   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:43.065252   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:43.065282   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:43.065288   92925 cri.go:89] found id: ""
	I1213 19:11:43.065296   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:43.065358   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:43.070007   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:43.074047   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:43.074122   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:43.108141   92925 cri.go:89] found id: ""
	I1213 19:11:43.108169   92925 logs.go:282] 0 containers: []
	W1213 19:11:43.108181   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:43.108188   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:43.108248   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:43.139539   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:43.139560   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:43.139566   92925 cri.go:89] found id: ""
	I1213 19:11:43.139574   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:43.139629   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:43.143534   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:43.147218   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:43.147292   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:43.175751   92925 cri.go:89] found id: ""
	I1213 19:11:43.175825   92925 logs.go:282] 0 containers: []
	W1213 19:11:43.175849   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:43.175868   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:43.175952   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:43.200994   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:43.201062   92925 cri.go:89] found id: ""
	I1213 19:11:43.201072   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:11:43.201127   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:43.204988   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:43.205128   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:43.231895   92925 cri.go:89] found id: ""
	I1213 19:11:43.231922   92925 logs.go:282] 0 containers: []
	W1213 19:11:43.231946   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:43.231955   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:43.231968   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:43.272192   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:43.272228   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:43.334615   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:11:43.334650   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:43.366125   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:43.366153   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:43.397225   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:43.397254   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:43.468828   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:43.460439    4331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:43.461076    4331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:43.462731    4331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:43.463290    4331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:43.464964    4331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:43.460439    4331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:43.461076    4331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:43.462731    4331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:43.463290    4331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:43.464964    4331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:43.468856   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:43.468869   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:43.519337   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:43.519376   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:43.552934   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:43.552963   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:43.636492   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:43.636526   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:43.735496   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:43.735529   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:43.748666   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:43.748693   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:46.276009   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:46.287459   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:46.287539   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:46.315787   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:46.315809   92925 cri.go:89] found id: ""
	I1213 19:11:46.315817   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:46.315881   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:46.319776   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:46.319870   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:46.349638   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:46.349701   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:46.349721   92925 cri.go:89] found id: ""
	I1213 19:11:46.349737   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:46.349810   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:46.353770   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:46.357319   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:46.357391   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:46.387852   92925 cri.go:89] found id: ""
	I1213 19:11:46.387879   92925 logs.go:282] 0 containers: []
	W1213 19:11:46.387888   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:46.387895   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:46.387956   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:46.415327   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:46.415351   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:46.415362   92925 cri.go:89] found id: ""
	I1213 19:11:46.415369   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:46.415425   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:46.420351   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:46.423877   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:46.423945   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:46.452445   92925 cri.go:89] found id: ""
	I1213 19:11:46.452471   92925 logs.go:282] 0 containers: []
	W1213 19:11:46.452480   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:46.452487   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:46.452543   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:46.488306   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:46.488329   92925 cri.go:89] found id: ""
	I1213 19:11:46.488337   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:11:46.488393   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:46.492372   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:46.492477   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:46.531601   92925 cri.go:89] found id: ""
	I1213 19:11:46.531625   92925 logs.go:282] 0 containers: []
	W1213 19:11:46.531635   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:46.531644   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:46.531656   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:46.576619   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:46.576653   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:46.637968   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:46.638005   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:46.666074   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:46.666103   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:46.699911   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:46.699988   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:46.741837   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:11:46.741889   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:46.771703   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:46.771729   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:46.848202   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:46.848240   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:46.949628   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:46.949664   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:46.963040   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:46.963071   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:47.045784   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:47.037108    4491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:47.038507    4491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:47.039621    4491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:47.040561    4491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:47.042097    4491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:47.037108    4491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:47.038507    4491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:47.039621    4491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:47.040561    4491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:47.042097    4491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:47.045805   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:47.045818   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:49.573745   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:49.584944   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:49.585049   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:49.612421   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:49.612440   92925 cri.go:89] found id: ""
	I1213 19:11:49.612448   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:49.612503   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:49.616771   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:49.616842   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:49.644250   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:49.644313   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:49.644342   92925 cri.go:89] found id: ""
	I1213 19:11:49.644365   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:49.644448   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:49.648357   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:49.652087   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:49.652211   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:49.678765   92925 cri.go:89] found id: ""
	I1213 19:11:49.678790   92925 logs.go:282] 0 containers: []
	W1213 19:11:49.678798   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:49.678804   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:49.678882   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:49.707013   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:49.707082   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:49.707102   92925 cri.go:89] found id: ""
	I1213 19:11:49.707128   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:49.707219   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:49.711513   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:49.715226   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:49.715321   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:49.741306   92925 cri.go:89] found id: ""
	I1213 19:11:49.741375   92925 logs.go:282] 0 containers: []
	W1213 19:11:49.741401   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:49.741421   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:49.741505   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:49.768427   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:49.768451   92925 cri.go:89] found id: ""
	I1213 19:11:49.768459   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:11:49.768517   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:49.772356   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:49.772478   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:49.801564   92925 cri.go:89] found id: ""
	I1213 19:11:49.801633   92925 logs.go:282] 0 containers: []
	W1213 19:11:49.801659   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:49.801687   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:49.801725   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:49.827233   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:49.827261   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:49.884809   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:49.884846   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:49.911980   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:11:49.912011   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:49.938143   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:49.938174   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:49.951851   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:49.951880   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:49.992816   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:49.992861   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:50.064112   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:50.064149   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:50.149808   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:50.149847   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:50.182876   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:50.182907   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:50.285831   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:50.285868   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:50.357682   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:50.350098    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:50.350586    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:50.351793    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:50.352420    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:50.354169    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:50.350098    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:50.350586    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:50.351793    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:50.352420    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:50.354169    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:52.858319   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:52.869473   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:52.869548   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:52.897144   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:52.897169   92925 cri.go:89] found id: ""
	I1213 19:11:52.897177   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:52.897234   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:52.900973   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:52.901074   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:52.928815   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:52.928842   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:52.928847   92925 cri.go:89] found id: ""
	I1213 19:11:52.928855   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:52.928912   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:52.932785   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:52.936853   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:52.936928   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:52.963913   92925 cri.go:89] found id: ""
	I1213 19:11:52.963940   92925 logs.go:282] 0 containers: []
	W1213 19:11:52.963949   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:52.963954   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:52.964018   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:52.993621   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:52.993685   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:52.993705   92925 cri.go:89] found id: ""
	I1213 19:11:52.993730   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:52.993820   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:52.997612   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:53.001214   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:53.001293   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:53.032707   92925 cri.go:89] found id: ""
	I1213 19:11:53.032733   92925 logs.go:282] 0 containers: []
	W1213 19:11:53.032742   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:53.032749   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:53.032812   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:53.059757   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:53.059780   92925 cri.go:89] found id: ""
	I1213 19:11:53.059805   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:11:53.059860   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:53.063600   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:53.063673   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:53.091179   92925 cri.go:89] found id: ""
	I1213 19:11:53.091248   92925 logs.go:282] 0 containers: []
	W1213 19:11:53.091286   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:53.091303   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:11:53.091316   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:53.123301   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:53.123391   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:53.196598   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:53.196634   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:53.227689   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:53.227715   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:53.327870   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:53.327905   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:53.343261   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:53.343290   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:53.371058   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:53.371089   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:53.418862   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:53.418896   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:53.475787   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:53.475822   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:53.507061   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:53.507090   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:53.584040   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:53.575651    4763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:53.576367    4763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:53.577874    4763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:53.578518    4763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:53.580190    4763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:53.575651    4763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:53.576367    4763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:53.577874    4763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:53.578518    4763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:53.580190    4763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:53.584063   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:53.584076   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:56.124239   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:56.136746   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:56.136818   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:56.165417   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:56.165442   92925 cri.go:89] found id: ""
	I1213 19:11:56.165451   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:56.165513   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:56.169272   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:56.169348   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:56.198281   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:56.198304   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:56.198309   92925 cri.go:89] found id: ""
	I1213 19:11:56.198316   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:56.198370   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:56.202310   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:56.206597   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:56.206670   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:56.233152   92925 cri.go:89] found id: ""
	I1213 19:11:56.233179   92925 logs.go:282] 0 containers: []
	W1213 19:11:56.233189   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:56.233195   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:56.233259   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:56.263980   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:56.264000   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:56.264005   92925 cri.go:89] found id: ""
	I1213 19:11:56.264013   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:56.264071   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:56.268409   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:56.272169   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:56.272245   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:56.307136   92925 cri.go:89] found id: ""
	I1213 19:11:56.307163   92925 logs.go:282] 0 containers: []
	W1213 19:11:56.307173   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:56.307179   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:56.307237   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:56.335595   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:56.335618   92925 cri.go:89] found id: ""
	I1213 19:11:56.335626   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:11:56.335684   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:56.339317   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:56.339388   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:56.365740   92925 cri.go:89] found id: ""
	I1213 19:11:56.365763   92925 logs.go:282] 0 containers: []
	W1213 19:11:56.365773   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:56.365782   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:56.365795   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:56.392684   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:56.392715   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:56.443884   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:56.443916   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:56.470931   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:11:56.471007   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:56.498493   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:56.498569   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:56.594275   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:56.594325   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:56.697865   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:56.697902   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:56.710803   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:56.710833   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:56.774588   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:56.766250    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:56.767127    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:56.768759    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:56.769116    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:56.770766    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:56.766250    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:56.767127    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:56.768759    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:56.769116    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:56.770766    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:56.774608   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:56.774621   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:56.822318   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:56.822354   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:56.879404   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:56.879440   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:59.418085   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:59.429523   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:59.429599   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:59.459140   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:59.459164   92925 cri.go:89] found id: ""
	I1213 19:11:59.459173   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:59.459250   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:59.463131   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:59.463231   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:59.491515   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:59.491539   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:59.491544   92925 cri.go:89] found id: ""
	I1213 19:11:59.491552   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:59.491650   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:59.495555   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:59.499043   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:59.499118   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:59.542670   92925 cri.go:89] found id: ""
	I1213 19:11:59.542745   92925 logs.go:282] 0 containers: []
	W1213 19:11:59.542771   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:59.542785   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:59.542861   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:59.569926   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:59.569950   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:59.569954   92925 cri.go:89] found id: ""
	I1213 19:11:59.569962   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:59.570030   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:59.574242   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:59.578071   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:59.578177   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:59.610686   92925 cri.go:89] found id: ""
	I1213 19:11:59.610714   92925 logs.go:282] 0 containers: []
	W1213 19:11:59.610723   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:59.610729   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:59.610789   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:59.639587   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:59.639641   92925 cri.go:89] found id: ""
	I1213 19:11:59.639659   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:11:59.639720   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:59.644316   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:59.644404   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:59.672619   92925 cri.go:89] found id: ""
	I1213 19:11:59.672644   92925 logs.go:282] 0 containers: []
	W1213 19:11:59.672653   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:59.672663   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:11:59.672684   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:59.700144   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:59.700172   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:59.777808   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:59.777856   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:59.811078   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:59.811111   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:59.910789   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:59.910827   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:59.987053   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:59.975650    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:59.976469    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:59.977682    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:59.978310    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:59.979849    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:59.975650    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:59.976469    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:59.977682    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:59.978310    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:59.979849    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:00.003642   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:00.003687   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:00.194711   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:00.194803   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:00.357297   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:00.357336   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:00.438487   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:00.438580   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:00.454845   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:00.454880   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:00.564592   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:00.564633   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:03.112543   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:03.123663   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:03.123738   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:03.157514   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:03.157538   92925 cri.go:89] found id: ""
	I1213 19:12:03.157546   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:03.157601   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:03.161756   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:03.161829   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:03.187867   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:03.187887   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:03.187892   92925 cri.go:89] found id: ""
	I1213 19:12:03.187900   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:03.187954   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:03.191586   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:03.195089   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:03.195186   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:03.227702   92925 cri.go:89] found id: ""
	I1213 19:12:03.227727   92925 logs.go:282] 0 containers: []
	W1213 19:12:03.227736   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:03.227742   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:03.227802   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:03.254539   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:03.254561   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:03.254566   92925 cri.go:89] found id: ""
	I1213 19:12:03.254574   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:03.254653   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:03.258434   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:03.262232   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:03.262309   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:03.293528   92925 cri.go:89] found id: ""
	I1213 19:12:03.293552   92925 logs.go:282] 0 containers: []
	W1213 19:12:03.293561   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:03.293567   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:03.293627   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:03.324573   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:03.324595   92925 cri.go:89] found id: ""
	I1213 19:12:03.324603   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:03.324655   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:03.328400   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:03.328469   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:03.354317   92925 cri.go:89] found id: ""
	I1213 19:12:03.354342   92925 logs.go:282] 0 containers: []
	W1213 19:12:03.354351   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:03.354362   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:03.354376   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:03.416520   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:03.416559   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:03.443937   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:03.443966   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:03.520631   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:03.520669   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:03.539545   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:03.539575   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:03.609658   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:03.599495    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:03.600262    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:03.602170    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:03.604093    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:03.604836    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:03.599495    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:03.600262    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:03.602170    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:03.604093    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:03.604836    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:03.609679   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:03.609691   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:03.641994   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:03.642021   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:03.683262   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:03.683296   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:03.711455   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:03.711486   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:03.742963   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:03.742994   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:03.842936   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:03.842971   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:06.387950   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:06.398757   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:06.398838   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:06.427281   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:06.427343   92925 cri.go:89] found id: ""
	I1213 19:12:06.427359   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:06.427424   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:06.431296   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:06.431370   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:06.458047   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:06.458069   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:06.458073   92925 cri.go:89] found id: ""
	I1213 19:12:06.458081   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:06.458138   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:06.461822   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:06.466010   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:06.466084   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:06.504515   92925 cri.go:89] found id: ""
	I1213 19:12:06.504542   92925 logs.go:282] 0 containers: []
	W1213 19:12:06.504551   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:06.504560   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:06.504621   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:06.541478   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:06.541501   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:06.541506   92925 cri.go:89] found id: ""
	I1213 19:12:06.541514   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:06.541576   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:06.545645   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:06.549634   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:06.549704   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:06.576630   92925 cri.go:89] found id: ""
	I1213 19:12:06.576698   92925 logs.go:282] 0 containers: []
	W1213 19:12:06.576724   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:06.576744   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:06.576832   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:06.604207   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:06.604229   92925 cri.go:89] found id: ""
	I1213 19:12:06.604237   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:06.604298   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:06.608117   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:06.608232   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:06.634291   92925 cri.go:89] found id: ""
	I1213 19:12:06.634362   92925 logs.go:282] 0 containers: []
	W1213 19:12:06.634379   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:06.634388   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:06.634402   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:06.696997   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:06.697085   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:06.756705   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:06.756741   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:06.836493   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:06.836525   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:06.936663   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:06.936700   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:06.949180   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:06.949212   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:07.020703   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:07.012352    5275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:07.013247    5275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:07.014825    5275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:07.015260    5275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:07.016747    5275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:07.012352    5275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:07.013247    5275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:07.014825    5275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:07.015260    5275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:07.016747    5275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:07.020728   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:07.020741   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:07.052354   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:07.052383   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:07.079834   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:07.079865   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:07.119690   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:07.119720   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:07.146357   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:07.146385   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:09.686883   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:09.697849   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:09.697924   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:09.724282   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:09.724307   92925 cri.go:89] found id: ""
	I1213 19:12:09.724316   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:09.724374   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:09.727853   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:09.727929   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:09.757294   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:09.757315   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:09.757320   92925 cri.go:89] found id: ""
	I1213 19:12:09.757328   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:09.757383   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:09.761291   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:09.764680   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:09.764755   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:09.791939   92925 cri.go:89] found id: ""
	I1213 19:12:09.791964   92925 logs.go:282] 0 containers: []
	W1213 19:12:09.791974   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:09.791979   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:09.792059   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:09.819349   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:09.819415   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:09.819435   92925 cri.go:89] found id: ""
	I1213 19:12:09.819460   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:09.819540   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:09.823580   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:09.827023   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:09.827138   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:09.857888   92925 cri.go:89] found id: ""
	I1213 19:12:09.857966   92925 logs.go:282] 0 containers: []
	W1213 19:12:09.857990   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:09.858001   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:09.858066   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:09.884350   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:09.884373   92925 cri.go:89] found id: ""
	I1213 19:12:09.884381   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:09.884438   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:09.888641   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:09.888720   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:09.915592   92925 cri.go:89] found id: ""
	I1213 19:12:09.915614   92925 logs.go:282] 0 containers: []
	W1213 19:12:09.915623   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:09.915632   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:09.915644   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:09.941582   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:09.941614   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:10.002342   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:10.002377   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:10.031301   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:10.031336   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:10.071296   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:10.071332   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:10.123567   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:10.123605   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:10.157428   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:10.157457   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:10.238347   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:10.238426   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:10.334563   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:10.334598   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:10.347255   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:10.347286   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:10.432160   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:10.423156    5451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:10.423973    5451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:10.425617    5451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:10.426254    5451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:10.428070    5451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:10.423156    5451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:10.423973    5451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:10.425617    5451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:10.426254    5451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:10.428070    5451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:10.432226   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:10.432252   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:12.994728   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:13.005943   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:13.006017   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:13.033581   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:13.033602   92925 cri.go:89] found id: ""
	I1213 19:12:13.033610   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:13.033689   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:13.037439   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:13.037531   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:13.069482   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:13.069506   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:13.069511   92925 cri.go:89] found id: ""
	I1213 19:12:13.069520   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:13.069579   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:13.073384   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:13.077179   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:13.077250   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:13.117434   92925 cri.go:89] found id: ""
	I1213 19:12:13.117508   92925 logs.go:282] 0 containers: []
	W1213 19:12:13.117525   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:13.117532   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:13.117603   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:13.151113   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:13.151191   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:13.151211   92925 cri.go:89] found id: ""
	I1213 19:12:13.151235   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:13.151330   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:13.155305   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:13.159267   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:13.159375   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:13.193156   92925 cri.go:89] found id: ""
	I1213 19:12:13.193183   92925 logs.go:282] 0 containers: []
	W1213 19:12:13.193191   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:13.193197   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:13.193303   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:13.228192   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:13.228272   92925 cri.go:89] found id: ""
	I1213 19:12:13.228304   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:13.228385   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:13.232149   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:13.232270   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:13.265793   92925 cri.go:89] found id: ""
	I1213 19:12:13.265868   92925 logs.go:282] 0 containers: []
	W1213 19:12:13.265892   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:13.265914   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:13.265974   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:13.298247   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:13.298332   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:13.338944   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:13.338977   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:13.398561   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:13.398600   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:13.426862   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:13.426891   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:13.526771   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:13.526807   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:13.539556   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:13.539587   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:13.606738   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:13.598805    5571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:13.599569    5571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:13.600660    5571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:13.601348    5571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:13.602977    5571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:13.598805    5571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:13.599569    5571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:13.600660    5571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:13.601348    5571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:13.602977    5571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:13.606761   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:13.606777   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:13.632299   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:13.632367   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:13.681186   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:13.681224   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:13.715711   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:13.715741   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:16.289974   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:16.301720   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:16.301794   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:16.333180   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:16.333203   92925 cri.go:89] found id: ""
	I1213 19:12:16.333211   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:16.333262   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:16.337163   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:16.337233   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:16.366808   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:16.366829   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:16.366834   92925 cri.go:89] found id: ""
	I1213 19:12:16.366841   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:16.366897   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:16.370643   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:16.374381   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:16.374453   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:16.402639   92925 cri.go:89] found id: ""
	I1213 19:12:16.402663   92925 logs.go:282] 0 containers: []
	W1213 19:12:16.402672   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:16.402678   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:16.402735   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:16.429862   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:16.429927   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:16.429948   92925 cri.go:89] found id: ""
	I1213 19:12:16.429971   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:16.430057   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:16.437586   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:16.443620   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:16.443739   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:16.468889   92925 cri.go:89] found id: ""
	I1213 19:12:16.468915   92925 logs.go:282] 0 containers: []
	W1213 19:12:16.468933   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:16.468940   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:16.469002   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:16.497884   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:16.497952   92925 cri.go:89] found id: ""
	I1213 19:12:16.497975   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:16.498065   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:16.501907   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:16.502017   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:16.528833   92925 cri.go:89] found id: ""
	I1213 19:12:16.528861   92925 logs.go:282] 0 containers: []
	W1213 19:12:16.528871   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:16.528880   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:16.528891   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:16.571970   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:16.572003   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:16.599399   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:16.599433   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:16.626668   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:16.626698   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:16.657476   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:16.657505   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:16.756171   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:16.756207   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:16.768558   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:16.768587   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:16.841002   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:16.841041   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:16.913877   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:16.913951   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:17.002296   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:16.981549    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:16.983800    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:16.984559    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:16.987461    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:16.988234    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:16.981549    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:16.983800    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:16.984559    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:16.987461    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:16.988234    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:17.002364   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:17.002385   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:17.029940   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:17.029968   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:19.576739   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:19.587975   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:19.588041   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:19.614817   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:19.614840   92925 cri.go:89] found id: ""
	I1213 19:12:19.614848   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:19.614903   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:19.618582   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:19.618679   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:19.651398   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:19.651419   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:19.651424   92925 cri.go:89] found id: ""
	I1213 19:12:19.651432   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:19.651501   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:19.655392   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:19.659059   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:19.659134   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:19.684221   92925 cri.go:89] found id: ""
	I1213 19:12:19.684247   92925 logs.go:282] 0 containers: []
	W1213 19:12:19.684257   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:19.684264   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:19.684323   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:19.711198   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:19.711220   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:19.711226   92925 cri.go:89] found id: ""
	I1213 19:12:19.711233   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:19.711289   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:19.715680   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:19.719221   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:19.719292   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:19.751237   92925 cri.go:89] found id: ""
	I1213 19:12:19.751286   92925 logs.go:282] 0 containers: []
	W1213 19:12:19.751296   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:19.751303   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:19.751371   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:19.778300   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:19.778321   92925 cri.go:89] found id: ""
	I1213 19:12:19.778330   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:19.778413   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:19.782520   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:19.782614   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:19.814477   92925 cri.go:89] found id: ""
	I1213 19:12:19.814507   92925 logs.go:282] 0 containers: []
	W1213 19:12:19.814517   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:19.814526   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:19.814558   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:19.855891   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:19.855922   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:19.917648   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:19.917687   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:19.949548   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:19.949574   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:19.976644   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:19.976680   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:20.064988   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:20.065042   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:20.114742   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:20.114776   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:20.220028   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:20.220066   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:20.232673   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:20.232703   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:20.314099   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:20.305597    5860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:20.306343    5860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:20.308133    5860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:20.308739    5860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:20.310382    5860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:20.305597    5860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:20.306343    5860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:20.308133    5860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:20.308739    5860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:20.310382    5860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:20.314125   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:20.314142   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:20.358618   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:20.358649   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:22.884692   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:22.896642   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:22.896714   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:22.925894   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:22.925919   92925 cri.go:89] found id: ""
	I1213 19:12:22.925928   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:22.925982   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:22.929556   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:22.929630   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:22.957310   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:22.957375   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:22.957393   92925 cri.go:89] found id: ""
	I1213 19:12:22.957419   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:22.957496   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:22.961230   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:22.964927   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:22.965122   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:22.993901   92925 cri.go:89] found id: ""
	I1213 19:12:22.993974   92925 logs.go:282] 0 containers: []
	W1213 19:12:22.994000   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:22.994012   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:22.994092   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:23.021087   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:23.021112   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:23.021117   92925 cri.go:89] found id: ""
	I1213 19:12:23.021123   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:23.021179   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:23.025414   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:23.029044   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:23.029147   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:23.054815   92925 cri.go:89] found id: ""
	I1213 19:12:23.054840   92925 logs.go:282] 0 containers: []
	W1213 19:12:23.054848   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:23.054855   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:23.054913   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:23.080286   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:23.080312   92925 cri.go:89] found id: ""
	I1213 19:12:23.080320   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:23.080407   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:23.084274   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:23.084375   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:23.115727   92925 cri.go:89] found id: ""
	I1213 19:12:23.115750   92925 logs.go:282] 0 containers: []
	W1213 19:12:23.115758   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:23.115767   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:23.115796   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:23.194830   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:23.186405    5944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:23.187281    5944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:23.188756    5944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:23.189379    5944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:23.191250    5944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:23.186405    5944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:23.187281    5944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:23.188756    5944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:23.189379    5944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:23.191250    5944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:23.194890   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:23.194911   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:23.234766   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:23.234801   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:23.282930   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:23.282966   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:23.352028   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:23.352067   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:23.379340   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:23.379418   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:23.425558   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:23.425589   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:23.453170   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:23.453198   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:23.484993   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:23.485089   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:23.575060   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:23.575093   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:23.676623   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:23.676658   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:26.191200   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:26.202087   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:26.202208   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:26.237575   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:26.237607   92925 cri.go:89] found id: ""
	I1213 19:12:26.237616   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:26.237685   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:26.242604   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:26.242726   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:26.275657   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:26.275680   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:26.275687   92925 cri.go:89] found id: ""
	I1213 19:12:26.275696   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:26.275774   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:26.279747   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:26.283677   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:26.283784   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:26.312109   92925 cri.go:89] found id: ""
	I1213 19:12:26.312185   92925 logs.go:282] 0 containers: []
	W1213 19:12:26.312219   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:26.312239   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:26.312329   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:26.342409   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:26.342432   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:26.342437   92925 cri.go:89] found id: ""
	I1213 19:12:26.342445   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:26.342500   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:26.346485   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:26.350281   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:26.350365   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:26.375751   92925 cri.go:89] found id: ""
	I1213 19:12:26.375775   92925 logs.go:282] 0 containers: []
	W1213 19:12:26.375783   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:26.375790   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:26.375864   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:26.401584   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:26.401607   92925 cri.go:89] found id: ""
	I1213 19:12:26.401614   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:26.401686   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:26.405294   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:26.405373   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:26.433390   92925 cri.go:89] found id: ""
	I1213 19:12:26.433467   92925 logs.go:282] 0 containers: []
	W1213 19:12:26.433491   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:26.433507   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:26.433533   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:26.493265   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:26.493305   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:26.528279   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:26.528307   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:26.612530   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:26.612565   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:26.625201   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:26.625231   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:26.695921   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:26.686948    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:26.687827    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:26.689491    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:26.690111    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:26.691852    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:26.686948    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:26.687827    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:26.689491    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:26.690111    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:26.691852    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:26.695942   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:26.695955   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:26.721367   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:26.721436   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:26.747790   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:26.747818   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:26.778783   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:26.778813   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:26.875307   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:26.875341   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:26.926065   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:26.926104   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
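The pass above is minikube's periodic log-collection sweep: it enumerates each control-plane container through crictl, then tails the last 400 lines of every container it finds, plus the CRI-O and kubelet journals and kernel warnings. The same sweep can be reproduced by hand inside the node with the commands captured in the log; the container ID below is a placeholder to be filled in from the crictl ps output.

# List control-plane containers known to CRI-O, in any state, by name filter.
sudo crictl ps -a --quiet --name=kube-apiserver
sudo crictl ps -a --quiet --name=etcd
sudo crictl ps -a --quiet --name=kube-scheduler
sudo crictl ps -a --quiet --name=kube-controller-manager

# Tail the last 400 lines of one container's logs, using an ID returned above.
sudo /usr/local/bin/crictl logs --tail 400 <container-id>

# Runtime, kubelet, and kernel logs exactly as gathered in the sweep.
sudo journalctl -u crio -n 400
sudo journalctl -u kubelet -n 400
sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400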
	I1213 19:12:29.471412   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:29.482208   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:29.482279   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:29.518089   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:29.518111   92925 cri.go:89] found id: ""
	I1213 19:12:29.518120   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:29.518179   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:29.522151   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:29.522316   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:29.550522   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:29.550548   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:29.550553   92925 cri.go:89] found id: ""
	I1213 19:12:29.550561   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:29.550614   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:29.554476   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:29.557855   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:29.557927   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:29.585314   92925 cri.go:89] found id: ""
	I1213 19:12:29.585337   92925 logs.go:282] 0 containers: []
	W1213 19:12:29.585346   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:29.585352   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:29.585415   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:29.613061   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:29.613081   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:29.613087   92925 cri.go:89] found id: ""
	I1213 19:12:29.613094   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:29.613149   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:29.617383   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:29.621127   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:29.621198   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:29.648388   92925 cri.go:89] found id: ""
	I1213 19:12:29.648415   92925 logs.go:282] 0 containers: []
	W1213 19:12:29.648425   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:29.648434   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:29.648493   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:29.675800   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:29.675823   92925 cri.go:89] found id: ""
	I1213 19:12:29.675832   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:29.675885   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:29.679891   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:29.679964   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:29.708415   92925 cri.go:89] found id: ""
	I1213 19:12:29.708439   92925 logs.go:282] 0 containers: []
	W1213 19:12:29.708447   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:29.708457   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:29.708469   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:29.747281   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:29.747357   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:29.791340   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:29.791374   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:29.834406   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:29.834436   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:29.861132   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:29.861162   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:29.962754   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:29.962831   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:29.975698   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:29.975725   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:30.136167   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:30.136206   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:30.219391   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:30.219426   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:30.250060   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:30.250090   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:30.324085   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:30.315913    6276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:30.316779    6276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:30.318083    6276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:30.318787    6276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:30.320486    6276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:30.315913    6276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:30.316779    6276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:30.318083    6276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:30.318787    6276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:30.320486    6276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:30.324108   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:30.324122   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
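Every "describe nodes" attempt in these sweeps fails the same way: the bundled kubectl cannot reach the apiserver because localhost:8443 refuses connections. A quick manual check from inside the node, using the same binary and kubeconfig paths that appear in the log (the ss step is a suggested extra check, not part of the captured output):

# Re-run the failing command from the log verbatim.
sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig

# See whether anything is listening on port 8443 (suggested check, not in the log).
sudo ss -ltnp | grep 8443

# List the kube-apiserver container with its state rather than just its ID.
sudo crictl ps -a --name=kube-apiserver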
	I1213 19:12:32.849129   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:32.861076   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:32.861146   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:32.890816   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:32.890837   92925 cri.go:89] found id: ""
	I1213 19:12:32.890845   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:32.890899   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:32.894607   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:32.894684   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:32.925830   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:32.925856   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:32.925861   92925 cri.go:89] found id: ""
	I1213 19:12:32.925868   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:32.925921   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:32.929582   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:32.932913   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:32.932983   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:32.959171   92925 cri.go:89] found id: ""
	I1213 19:12:32.959199   92925 logs.go:282] 0 containers: []
	W1213 19:12:32.959208   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:32.959214   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:32.959319   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:32.993282   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:32.993309   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:32.993315   92925 cri.go:89] found id: ""
	I1213 19:12:32.993331   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:32.993393   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:32.997923   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:33.002009   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:33.002111   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:33.029187   92925 cri.go:89] found id: ""
	I1213 19:12:33.029210   92925 logs.go:282] 0 containers: []
	W1213 19:12:33.029219   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:33.029225   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:33.029333   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:33.057252   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:33.057287   92925 cri.go:89] found id: ""
	I1213 19:12:33.057296   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:33.057360   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:33.061234   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:33.061340   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:33.089861   92925 cri.go:89] found id: ""
	I1213 19:12:33.089889   92925 logs.go:282] 0 containers: []
	W1213 19:12:33.089898   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:33.089907   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:33.089919   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:33.108679   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:33.108710   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:33.162722   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:33.162768   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:33.227823   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:33.227861   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:33.260183   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:33.260210   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:33.286847   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:33.286872   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:33.368228   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:33.368263   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:33.475747   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:33.475786   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:33.554192   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:33.546124    6395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:33.546992    6395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:33.548557    6395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:33.549128    6395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:33.550628    6395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:33.546124    6395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:33.546992    6395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:33.548557    6395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:33.549128    6395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:33.550628    6395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:33.554212   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:33.554225   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:33.579823   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:33.579850   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:33.623777   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:33.623815   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:36.157314   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:36.168502   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:36.168576   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:36.196421   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:36.196442   92925 cri.go:89] found id: ""
	I1213 19:12:36.196451   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:36.196511   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:36.200568   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:36.200636   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:36.227300   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:36.227324   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:36.227331   92925 cri.go:89] found id: ""
	I1213 19:12:36.227338   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:36.227396   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:36.231459   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:36.235239   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:36.235316   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:36.268611   92925 cri.go:89] found id: ""
	I1213 19:12:36.268635   92925 logs.go:282] 0 containers: []
	W1213 19:12:36.268644   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:36.268650   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:36.268731   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:36.308479   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:36.308576   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:36.308597   92925 cri.go:89] found id: ""
	I1213 19:12:36.308642   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:36.308738   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:36.312547   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:36.316077   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:36.316189   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:36.342346   92925 cri.go:89] found id: ""
	I1213 19:12:36.342382   92925 logs.go:282] 0 containers: []
	W1213 19:12:36.342392   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:36.342414   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:36.342496   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:36.368808   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:36.368834   92925 cri.go:89] found id: ""
	I1213 19:12:36.368844   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:36.368899   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:36.372705   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:36.372790   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:36.399760   92925 cri.go:89] found id: ""
	I1213 19:12:36.399796   92925 logs.go:282] 0 containers: []
	W1213 19:12:36.399805   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:36.399817   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:36.399829   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:36.497016   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:36.497097   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:36.511432   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:36.511552   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:36.587222   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:36.577960    6497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:36.578711    6497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:36.580805    6497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:36.581572    6497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:36.583427    6497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:36.577960    6497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:36.578711    6497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:36.580805    6497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:36.581572    6497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:36.583427    6497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:36.587247   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:36.587262   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:36.630739   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:36.630774   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:36.683440   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:36.683473   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:36.751190   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:36.751241   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:36.779744   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:36.779833   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:36.806180   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:36.806206   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:36.832449   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:36.832475   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:36.910859   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:36.910900   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:39.441151   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:39.452365   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:39.452439   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:39.484411   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:39.484436   92925 cri.go:89] found id: ""
	I1213 19:12:39.484444   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:39.484499   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:39.488316   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:39.488390   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:39.519236   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:39.519263   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:39.519268   92925 cri.go:89] found id: ""
	I1213 19:12:39.519277   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:39.519331   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:39.523340   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:39.529308   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:39.529377   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:39.559339   92925 cri.go:89] found id: ""
	I1213 19:12:39.559405   92925 logs.go:282] 0 containers: []
	W1213 19:12:39.559437   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:39.559456   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:39.559543   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:39.589737   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:39.589769   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:39.589775   92925 cri.go:89] found id: ""
	I1213 19:12:39.589783   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:39.589848   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:39.593976   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:39.598330   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:39.598421   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:39.631670   92925 cri.go:89] found id: ""
	I1213 19:12:39.631699   92925 logs.go:282] 0 containers: []
	W1213 19:12:39.631708   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:39.631714   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:39.631783   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:39.662738   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:39.662803   92925 cri.go:89] found id: ""
	I1213 19:12:39.662824   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:39.662906   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:39.666773   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:39.666867   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:39.695600   92925 cri.go:89] found id: ""
	I1213 19:12:39.695627   92925 logs.go:282] 0 containers: []
	W1213 19:12:39.695637   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:39.695646   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:39.695658   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:39.787866   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:39.787904   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:39.864556   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:39.853140    6628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:39.856488    6628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:39.857226    6628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:39.858708    6628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:39.859314    6628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:39.853140    6628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:39.856488    6628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:39.857226    6628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:39.858708    6628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:39.859314    6628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:39.864580   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:39.864594   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:39.893552   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:39.893593   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:39.935040   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:39.935070   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:39.977962   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:39.977992   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:40.052674   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:40.052713   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:40.145597   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:40.145709   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:40.181340   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:40.181368   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:40.194929   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:40.194999   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:40.222595   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:40.222665   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
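The whole cycle repeats roughly every three seconds: minikube re-runs pgrep for a kube-apiserver process and, while the apiserver stays unreachable, gathers the same set of logs again. An equivalent wait loop, written out only for illustration (the pgrep pattern is taken from the log; the two-minute bound is illustrative, not something the log specifies):

# Poll for a kube-apiserver process the way the log does, once every ~3 seconds.
for _ in $(seq 1 40); do
  if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
    echo "kube-apiserver process found"
    break
  fi
  sleep 3
done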
	I1213 19:12:42.749068   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:42.760019   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:42.760098   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:42.790868   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:42.790891   92925 cri.go:89] found id: ""
	I1213 19:12:42.790898   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:42.790953   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:42.794682   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:42.794770   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:42.823001   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:42.823024   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:42.823029   92925 cri.go:89] found id: ""
	I1213 19:12:42.823036   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:42.823102   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:42.826966   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:42.830581   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:42.830667   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:42.857298   92925 cri.go:89] found id: ""
	I1213 19:12:42.857325   92925 logs.go:282] 0 containers: []
	W1213 19:12:42.857334   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:42.857340   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:42.857402   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:42.888499   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:42.888524   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:42.888528   92925 cri.go:89] found id: ""
	I1213 19:12:42.888535   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:42.888601   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:42.894724   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:42.898823   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:42.898944   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:42.925225   92925 cri.go:89] found id: ""
	I1213 19:12:42.925262   92925 logs.go:282] 0 containers: []
	W1213 19:12:42.925271   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:42.925277   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:42.925363   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:42.954151   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:42.954186   92925 cri.go:89] found id: ""
	I1213 19:12:42.954195   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:42.954262   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:42.958191   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:42.958256   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:42.997632   92925 cri.go:89] found id: ""
	I1213 19:12:42.997699   92925 logs.go:282] 0 containers: []
	W1213 19:12:42.997722   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:42.997738   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:42.997750   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:43.044934   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:43.044968   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:43.130707   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:43.130787   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:43.162064   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:43.162196   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:43.174781   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:43.174807   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:43.248282   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:43.239057    6792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:43.239785    6792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:43.241456    6792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:43.242060    6792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:43.243778    6792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:43.239057    6792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:43.239785    6792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:43.241456    6792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:43.242060    6792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:43.243778    6792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:43.248309   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:43.248322   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:43.292697   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:43.292729   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:43.326878   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:43.326906   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:43.402321   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:43.402356   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:43.434630   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:43.434662   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:43.547901   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:43.547940   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
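Note that crictl keeps reporting two etcd IDs (808552… and c27cd9…) and two kube-scheduler IDs: the listings run with State:all, so they most likely include an exited instance left over from before the restart alongside the current one. Their states can be compared with the IDs from the log (illustrative commands; crictl ps without --quiet prints state and age, and crictl inspect dumps one container's full status):

# Show state and age for all etcd and kube-scheduler containers, running or exited.
sudo crictl ps -a --name=etcd
sudo crictl ps -a --name=kube-scheduler

# Inspect one container by the ID reported in the log, e.g. the second etcd instance.
sudo crictl inspect c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f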
	I1213 19:12:46.074896   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:46.086088   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:46.086156   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:46.138954   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:46.138977   92925 cri.go:89] found id: ""
	I1213 19:12:46.138985   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:46.139041   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:46.142934   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:46.143008   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:46.167983   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:46.168008   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:46.168014   92925 cri.go:89] found id: ""
	I1213 19:12:46.168022   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:46.168083   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:46.172203   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:46.176085   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:46.176164   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:46.206474   92925 cri.go:89] found id: ""
	I1213 19:12:46.206501   92925 logs.go:282] 0 containers: []
	W1213 19:12:46.206509   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:46.206515   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:46.206572   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:46.232990   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:46.233047   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:46.233052   92925 cri.go:89] found id: ""
	I1213 19:12:46.233059   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:46.233121   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:46.236960   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:46.241098   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:46.241171   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:46.277846   92925 cri.go:89] found id: ""
	I1213 19:12:46.277872   92925 logs.go:282] 0 containers: []
	W1213 19:12:46.277881   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:46.277886   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:46.277945   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:46.306293   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:46.306316   92925 cri.go:89] found id: ""
	I1213 19:12:46.306324   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:46.306383   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:46.310146   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:46.310220   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:46.337703   92925 cri.go:89] found id: ""
	I1213 19:12:46.337728   92925 logs.go:282] 0 containers: []
	W1213 19:12:46.337737   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:46.337746   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:46.337757   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:46.433354   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:46.433391   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:46.446062   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:46.446089   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:46.474866   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:46.474894   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:46.518894   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:46.518972   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:46.584190   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:46.584221   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:46.612728   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:46.612798   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:46.693365   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:46.693401   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:46.730005   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:46.730036   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:46.805821   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:46.797250    6951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:46.797857    6951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:46.799401    6951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:46.799906    6951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:46.801867    6951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:46.797250    6951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:46.797857    6951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:46.799401    6951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:46.799906    6951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:46.801867    6951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:46.805844   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:46.805858   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:46.849142   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:46.849180   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:49.377325   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:49.388007   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:49.388073   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:49.414745   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:49.414768   92925 cri.go:89] found id: ""
	I1213 19:12:49.414777   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:49.414831   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:49.418502   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:49.418579   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:49.443751   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:49.443772   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:49.443777   92925 cri.go:89] found id: ""
	I1213 19:12:49.443784   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:49.443864   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:49.447524   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:49.450957   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:49.451025   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:49.478284   92925 cri.go:89] found id: ""
	I1213 19:12:49.478309   92925 logs.go:282] 0 containers: []
	W1213 19:12:49.478318   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:49.478324   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:49.478383   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:49.506581   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:49.506604   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:49.506609   92925 cri.go:89] found id: ""
	I1213 19:12:49.506617   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:49.506673   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:49.513976   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:49.518489   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:49.518567   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:49.545961   92925 cri.go:89] found id: ""
	I1213 19:12:49.545986   92925 logs.go:282] 0 containers: []
	W1213 19:12:49.545995   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:49.546001   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:49.546072   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:49.579946   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:49.579974   92925 cri.go:89] found id: ""
	I1213 19:12:49.579983   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:49.580036   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:49.583648   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:49.583726   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:49.610201   92925 cri.go:89] found id: ""
	I1213 19:12:49.610278   92925 logs.go:282] 0 containers: []
	W1213 19:12:49.610294   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:49.610304   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:49.610321   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:49.682958   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:49.682995   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:49.716028   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:49.716058   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:49.744220   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:49.744248   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:49.783347   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:49.783379   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:49.826736   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:49.826770   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:49.860737   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:49.860767   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:49.894176   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:49.894206   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:49.978486   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:49.978525   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:50.088530   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:50.088567   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:50.107858   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:50.107886   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:50.186950   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:50.178748    7100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:50.179306    7100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:50.180827    7100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:50.181343    7100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:50.182902    7100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:50.178748    7100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:50.179306    7100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:50.180827    7100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:50.181343    7100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:50.182902    7100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:52.687879   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:52.700111   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:52.700185   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:52.727611   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:52.727635   92925 cri.go:89] found id: ""
	I1213 19:12:52.727643   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:52.727699   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:52.732611   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:52.732683   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:52.760331   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:52.760355   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:52.760361   92925 cri.go:89] found id: ""
	I1213 19:12:52.760369   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:52.760424   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:52.764203   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:52.767807   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:52.767880   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:52.794453   92925 cri.go:89] found id: ""
	I1213 19:12:52.794528   92925 logs.go:282] 0 containers: []
	W1213 19:12:52.794552   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:52.794571   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:52.794662   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:52.824938   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:52.825046   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:52.825077   92925 cri.go:89] found id: ""
	I1213 19:12:52.825108   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:52.825170   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:52.828865   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:52.832644   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:52.832718   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:52.860489   92925 cri.go:89] found id: ""
	I1213 19:12:52.860512   92925 logs.go:282] 0 containers: []
	W1213 19:12:52.860521   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:52.860527   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:52.860588   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:52.886828   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:52.886862   92925 cri.go:89] found id: ""
	I1213 19:12:52.886872   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:52.886940   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:52.890986   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:52.891106   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:52.917681   92925 cri.go:89] found id: ""
	I1213 19:12:52.917749   92925 logs.go:282] 0 containers: []
	W1213 19:12:52.917776   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:52.917799   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:52.917837   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:52.948506   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:52.948535   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:52.977936   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:52.977963   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:53.041212   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:53.041249   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:53.080162   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:53.080189   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:53.174852   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:53.174897   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:53.273766   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:53.273802   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:53.285893   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:53.285925   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:53.352966   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:53.343677    7212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:53.345158    7212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:53.345928    7212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:53.347424    7212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:53.347925    7212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:53.343677    7212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:53.345158    7212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:53.345928    7212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:53.347424    7212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:53.347925    7212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:53.352990   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:53.353032   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:53.391432   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:53.391464   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:53.451329   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:53.451363   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:55.977809   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:55.993375   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:55.993492   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:56.026972   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:56.026993   92925 cri.go:89] found id: ""
	I1213 19:12:56.027001   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:56.027059   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:56.031128   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:56.031204   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:56.058936   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:56.058958   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:56.058963   92925 cri.go:89] found id: ""
	I1213 19:12:56.058971   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:56.059024   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:56.062862   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:56.066757   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:56.066858   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:56.096088   92925 cri.go:89] found id: ""
	I1213 19:12:56.096112   92925 logs.go:282] 0 containers: []
	W1213 19:12:56.096121   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:56.096134   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:56.096196   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:56.138653   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:56.138678   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:56.138683   92925 cri.go:89] found id: ""
	I1213 19:12:56.138691   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:56.138748   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:56.142767   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:56.146336   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:56.146413   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:56.176996   92925 cri.go:89] found id: ""
	I1213 19:12:56.177098   92925 logs.go:282] 0 containers: []
	W1213 19:12:56.177115   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:56.177122   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:56.177191   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:56.206318   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:56.206341   92925 cri.go:89] found id: ""
	I1213 19:12:56.206350   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:56.206405   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:56.210085   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:56.210208   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:56.240242   92925 cri.go:89] found id: ""
	I1213 19:12:56.240269   92925 logs.go:282] 0 containers: []
	W1213 19:12:56.240278   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:56.240287   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:56.240299   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:56.268772   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:56.268800   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:56.282265   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:56.282293   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:56.334697   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:56.334731   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:56.419986   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:56.420074   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:56.466391   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:56.466421   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:56.578289   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:56.578327   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:56.657266   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:56.648227    7343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:56.649364    7343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:56.650885    7343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:56.651401    7343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:56.653076    7343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:56.648227    7343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:56.649364    7343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:56.650885    7343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:56.651401    7343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:56.653076    7343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:56.657289   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:56.657302   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:56.685603   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:56.685631   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:56.732451   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:56.732487   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:56.807034   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:56.807068   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:59.335877   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:59.346983   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:59.347053   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:59.375213   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:59.375241   92925 cri.go:89] found id: ""
	I1213 19:12:59.375250   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:59.375308   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:59.379246   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:59.379319   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:59.406052   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:59.406073   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:59.406078   92925 cri.go:89] found id: ""
	I1213 19:12:59.406085   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:59.406142   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:59.409969   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:59.413744   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:59.413813   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:59.440031   92925 cri.go:89] found id: ""
	I1213 19:12:59.440057   92925 logs.go:282] 0 containers: []
	W1213 19:12:59.440066   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:59.440072   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:59.440131   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:59.470750   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:59.470770   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:59.470775   92925 cri.go:89] found id: ""
	I1213 19:12:59.470782   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:59.470836   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:59.474671   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:59.478148   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:59.478230   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:59.532301   92925 cri.go:89] found id: ""
	I1213 19:12:59.532334   92925 logs.go:282] 0 containers: []
	W1213 19:12:59.532344   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:59.532350   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:59.532423   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:59.558719   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:59.558742   92925 cri.go:89] found id: ""
	I1213 19:12:59.558750   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:59.558814   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:59.562460   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:59.562534   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:59.588851   92925 cri.go:89] found id: ""
	I1213 19:12:59.588916   92925 logs.go:282] 0 containers: []
	W1213 19:12:59.588942   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:59.588964   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:59.589031   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:59.665993   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:59.666032   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:59.712805   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:59.712839   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:59.725635   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:59.725688   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:59.797796   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:59.790093    7462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:59.790845    7462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:59.791906    7462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:59.792472    7462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:59.794170    7462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:59.790093    7462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:59.790845    7462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:59.791906    7462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:59.792472    7462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:59.794170    7462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:59.797819   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:59.797831   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:59.825855   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:59.825886   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:59.864251   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:59.864286   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:59.890125   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:59.890151   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:59.981337   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:59.981387   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:00.239751   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:00.239799   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:00.366187   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:00.368005   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:02.909028   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:02.919617   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:02.919732   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:02.946548   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:02.946613   92925 cri.go:89] found id: ""
	I1213 19:13:02.946629   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:02.946696   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:02.950448   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:02.950542   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:02.975550   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:02.975572   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:02.975577   92925 cri.go:89] found id: ""
	I1213 19:13:02.975585   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:02.975645   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:02.979406   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:02.984704   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:02.984818   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:03.017288   92925 cri.go:89] found id: ""
	I1213 19:13:03.017311   92925 logs.go:282] 0 containers: []
	W1213 19:13:03.017320   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:03.017334   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:03.017393   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:03.048824   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:03.048850   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:03.048857   92925 cri.go:89] found id: ""
	I1213 19:13:03.048864   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:03.048919   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:03.052630   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:03.056397   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:03.056521   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:03.088050   92925 cri.go:89] found id: ""
	I1213 19:13:03.088123   92925 logs.go:282] 0 containers: []
	W1213 19:13:03.088146   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:03.088165   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:03.088271   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:03.119709   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:13:03.119778   92925 cri.go:89] found id: ""
	I1213 19:13:03.119801   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:13:03.119889   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:03.127122   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:03.127274   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:03.162913   92925 cri.go:89] found id: ""
	I1213 19:13:03.162936   92925 logs.go:282] 0 containers: []
	W1213 19:13:03.162945   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:03.162953   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:03.162966   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:03.207543   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:03.207579   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:03.279537   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:13:03.279575   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:13:03.314034   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:03.314062   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:03.394532   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:03.394567   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:03.428318   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:03.428351   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:03.528148   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:03.528187   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:03.626750   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:03.618493    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:03.619154    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:03.620764    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:03.621367    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:03.622889    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:03.618493    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:03.619154    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:03.620764    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:03.621367    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:03.622889    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:03.626775   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:03.626788   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:03.685480   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:03.685519   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:03.713856   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:03.713883   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:03.734590   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:03.734620   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:06.266879   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:06.277733   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:06.277799   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:06.305175   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:06.305196   92925 cri.go:89] found id: ""
	I1213 19:13:06.305204   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:06.305258   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:06.308850   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:06.308928   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:06.335153   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:06.335177   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:06.335182   92925 cri.go:89] found id: ""
	I1213 19:13:06.335189   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:06.335246   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:06.338903   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:06.342418   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:06.342493   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:06.372604   92925 cri.go:89] found id: ""
	I1213 19:13:06.372632   92925 logs.go:282] 0 containers: []
	W1213 19:13:06.372641   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:06.372646   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:06.372707   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:06.402642   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:06.402670   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:06.402675   92925 cri.go:89] found id: ""
	I1213 19:13:06.402682   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:06.402740   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:06.406787   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:06.411254   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:06.411335   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:06.437659   92925 cri.go:89] found id: ""
	I1213 19:13:06.437736   92925 logs.go:282] 0 containers: []
	W1213 19:13:06.437751   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:06.437758   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:06.437829   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:06.466702   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:06.466725   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:13:06.466730   92925 cri.go:89] found id: ""
	I1213 19:13:06.466737   92925 logs.go:282] 2 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:13:06.466793   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:06.470567   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:06.474150   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:06.474224   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:06.501494   92925 cri.go:89] found id: ""
	I1213 19:13:06.501569   92925 logs.go:282] 0 containers: []
	W1213 19:13:06.501594   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:06.501617   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:06.501662   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:06.544779   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:06.544813   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:06.609379   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:06.609413   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:06.637668   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:06.637698   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:06.664078   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:06.664105   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:06.709192   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:13:06.709225   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:13:06.737814   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:06.737845   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:06.810267   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:06.810302   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:06.841843   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:06.841871   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:06.938739   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:06.938776   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:06.951386   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:06.951414   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:07.032986   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:07.025075    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:07.025642    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:07.027282    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:07.027955    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:07.029566    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:07.025075    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:07.025642    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:07.027282    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:07.027955    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:07.029566    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:07.033040   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:07.033053   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:09.558493   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:09.570604   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:09.570681   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:09.598108   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:09.598133   92925 cri.go:89] found id: ""
	I1213 19:13:09.598141   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:09.598197   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:09.602596   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:09.602673   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:09.629705   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:09.629727   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:09.629733   92925 cri.go:89] found id: ""
	I1213 19:13:09.629741   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:09.629798   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:09.634280   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:09.637817   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:09.637895   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:09.665414   92925 cri.go:89] found id: ""
	I1213 19:13:09.665438   92925 logs.go:282] 0 containers: []
	W1213 19:13:09.665447   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:09.665453   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:09.665509   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:09.691729   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:09.691754   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:09.691759   92925 cri.go:89] found id: ""
	I1213 19:13:09.691766   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:09.691850   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:09.696064   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:09.700204   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:09.700308   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:09.732154   92925 cri.go:89] found id: ""
	I1213 19:13:09.732181   92925 logs.go:282] 0 containers: []
	W1213 19:13:09.732190   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:09.732196   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:09.732277   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:09.760821   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:09.760844   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:13:09.760849   92925 cri.go:89] found id: ""
	I1213 19:13:09.760856   92925 logs.go:282] 2 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:13:09.760918   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:09.764697   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:09.768225   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:09.768299   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:09.796678   92925 cri.go:89] found id: ""
	I1213 19:13:09.796748   92925 logs.go:282] 0 containers: []
	W1213 19:13:09.796773   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:09.796797   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:09.796844   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:09.892500   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:09.892536   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:09.905527   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:09.905557   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:09.964751   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:09.964785   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:10.026858   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:10.026896   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:10.095709   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:10.095747   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:10.135797   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:10.135834   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:10.207467   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:10.198321    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:10.199090    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:10.200887    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:10.201755    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:10.202624    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:10.198321    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:10.199090    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:10.200887    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:10.201755    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:10.202624    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:10.207502   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:10.207515   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:10.233202   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:10.233298   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:10.259818   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:13:10.259845   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:13:10.286455   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:10.286482   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:10.359430   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:10.359465   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:12.894266   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:12.905675   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:12.905773   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:12.932239   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:12.932259   92925 cri.go:89] found id: ""
	I1213 19:13:12.932267   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:12.932320   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:12.935869   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:12.935938   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:12.961758   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:12.961778   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:12.961782   92925 cri.go:89] found id: ""
	I1213 19:13:12.961789   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:12.961846   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:12.965449   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:12.968967   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:12.969071   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:13.001173   92925 cri.go:89] found id: ""
	I1213 19:13:13.001203   92925 logs.go:282] 0 containers: []
	W1213 19:13:13.001213   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:13.001219   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:13.001333   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:13.029728   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:13.029751   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:13.029756   92925 cri.go:89] found id: ""
	I1213 19:13:13.029764   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:13.029818   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:13.033632   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:13.037474   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:13.037598   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:13.064000   92925 cri.go:89] found id: ""
	I1213 19:13:13.064025   92925 logs.go:282] 0 containers: []
	W1213 19:13:13.064034   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:13.064040   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:13.064151   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:13.092827   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:13.092847   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:13:13.092852   92925 cri.go:89] found id: ""
	I1213 19:13:13.092859   92925 logs.go:282] 2 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:13:13.092913   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:13.097637   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:13.102128   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:13.102195   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:13.132820   92925 cri.go:89] found id: ""
	I1213 19:13:13.132891   92925 logs.go:282] 0 containers: []
	W1213 19:13:13.132912   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:13.132934   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:13.132976   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:13.200851   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:13:13.200889   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:13:13.232573   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:13.232603   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:13.325521   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:13.325556   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:13.338293   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:13.338324   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:13.369921   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:13.369950   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:13.416445   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:13.416477   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:13.443214   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:13.443243   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:13.468415   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:13.468448   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:13.553200   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:13.553248   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:13.596683   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:13.596717   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:13.678127   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:13.669907    8089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:13.670748    8089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:13.672392    8089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:13.672709    8089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:13.674262    8089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:13.669907    8089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:13.670748    8089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:13.672392    8089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:13.672709    8089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:13.674262    8089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:13.678150   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:13.678167   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:16.227377   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:16.238613   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:16.238685   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:16.271628   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:16.271652   92925 cri.go:89] found id: ""
	I1213 19:13:16.271661   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:16.271717   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:16.275571   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:16.275645   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:16.304819   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:16.304843   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:16.304848   92925 cri.go:89] found id: ""
	I1213 19:13:16.304856   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:16.304911   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:16.308802   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:16.312668   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:16.312741   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:16.347113   92925 cri.go:89] found id: ""
	I1213 19:13:16.347137   92925 logs.go:282] 0 containers: []
	W1213 19:13:16.347146   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:16.347153   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:16.347209   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:16.380339   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:16.380362   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:16.380368   92925 cri.go:89] found id: ""
	I1213 19:13:16.380376   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:16.380433   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:16.383986   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:16.387756   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:16.387876   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:16.419309   92925 cri.go:89] found id: ""
	I1213 19:13:16.419344   92925 logs.go:282] 0 containers: []
	W1213 19:13:16.419353   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:16.419359   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:16.419427   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:16.447987   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:16.448019   92925 cri.go:89] found id: ""
	I1213 19:13:16.448028   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:16.448093   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:16.452467   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:16.452551   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:16.478206   92925 cri.go:89] found id: ""
	I1213 19:13:16.478271   92925 logs.go:282] 0 containers: []
	W1213 19:13:16.478298   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:16.478319   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:16.478361   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:16.505859   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:16.505891   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:16.547050   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:16.547085   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:16.591041   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:16.591074   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:16.659418   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:16.659502   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:16.686174   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:16.686202   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:16.763753   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:16.763792   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:16.795967   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:16.795996   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:16.909202   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:16.909246   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:16.921936   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:16.921962   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:16.996415   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:16.987820    8228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:16.988740    8228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:16.990501    8228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:16.990844    8228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:16.992387    8228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:16.987820    8228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:16.988740    8228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:16.990501    8228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:16.990844    8228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:16.992387    8228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:16.996438   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:16.996452   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:19.525182   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:19.536170   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:19.536246   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:19.563344   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:19.563368   92925 cri.go:89] found id: ""
	I1213 19:13:19.563377   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:19.563432   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:19.567191   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:19.567263   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:19.594906   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:19.594926   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:19.594936   92925 cri.go:89] found id: ""
	I1213 19:13:19.594944   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:19.595012   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:19.599420   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:19.603163   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:19.603240   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:19.636656   92925 cri.go:89] found id: ""
	I1213 19:13:19.636681   92925 logs.go:282] 0 containers: []
	W1213 19:13:19.636690   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:19.636696   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:19.636753   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:19.667204   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:19.667274   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:19.667292   92925 cri.go:89] found id: ""
	I1213 19:13:19.667316   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:19.667395   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:19.671184   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:19.674972   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:19.675041   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:19.704947   92925 cri.go:89] found id: ""
	I1213 19:13:19.704971   92925 logs.go:282] 0 containers: []
	W1213 19:13:19.704980   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:19.704988   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:19.705073   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:19.730669   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:19.730691   92925 cri.go:89] found id: ""
	I1213 19:13:19.730699   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:19.730771   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:19.735384   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:19.735477   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:19.760611   92925 cri.go:89] found id: ""
	I1213 19:13:19.760634   92925 logs.go:282] 0 containers: []
	W1213 19:13:19.760643   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:19.760669   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:19.760686   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:19.788592   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:19.788621   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:19.882694   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:19.882730   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:19.954514   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:19.946675    8317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:19.947253    8317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:19.948589    8317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:19.949210    8317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:19.950900    8317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:19.946675    8317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:19.947253    8317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:19.948589    8317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:19.949210    8317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:19.950900    8317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:19.954535   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:19.954550   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:19.980616   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:19.980694   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:20.035895   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:20.035930   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:20.104716   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:20.104768   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:20.199665   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:20.199701   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:20.234652   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:20.234680   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:20.248416   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:20.248444   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:20.296588   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:20.296624   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:22.824017   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:22.838193   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:22.838267   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:22.874481   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:22.874503   92925 cri.go:89] found id: ""
	I1213 19:13:22.874512   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:22.874578   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:22.878378   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:22.878467   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:22.907053   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:22.907075   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:22.907079   92925 cri.go:89] found id: ""
	I1213 19:13:22.907086   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:22.907143   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:22.911144   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:22.914933   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:22.915007   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:22.942646   92925 cri.go:89] found id: ""
	I1213 19:13:22.942714   92925 logs.go:282] 0 containers: []
	W1213 19:13:22.942729   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:22.942736   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:22.942797   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:22.969713   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:22.969735   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:22.969740   92925 cri.go:89] found id: ""
	I1213 19:13:22.969748   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:22.969804   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:22.973708   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:22.977426   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:22.977514   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:23.007912   92925 cri.go:89] found id: ""
	I1213 19:13:23.007939   92925 logs.go:282] 0 containers: []
	W1213 19:13:23.007948   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:23.007955   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:23.008018   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:23.040260   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:23.040284   92925 cri.go:89] found id: ""
	I1213 19:13:23.040293   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:23.040348   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:23.044273   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:23.044348   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:23.073414   92925 cri.go:89] found id: ""
	I1213 19:13:23.073445   92925 logs.go:282] 0 containers: []
	W1213 19:13:23.073454   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:23.073466   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:23.073478   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:23.147486   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:23.147526   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:23.180397   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:23.180426   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:23.262279   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:23.253482    8457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:23.254529    8457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:23.255324    8457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:23.256834    8457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:23.257439    8457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:23.253482    8457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:23.254529    8457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:23.255324    8457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:23.256834    8457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:23.257439    8457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:23.262302   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:23.262318   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:23.288912   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:23.288942   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:23.328328   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:23.328366   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:23.421984   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:23.422020   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:23.524961   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:23.524997   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:23.542790   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:23.542821   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:23.591486   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:23.591522   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:23.621748   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:23.621777   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:26.152673   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:26.164673   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:26.164740   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:26.192010   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:26.192031   92925 cri.go:89] found id: ""
	I1213 19:13:26.192040   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:26.192095   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:26.195849   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:26.195918   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:26.224593   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:26.224657   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:26.224677   92925 cri.go:89] found id: ""
	I1213 19:13:26.224702   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:26.224772   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:26.228545   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:26.231970   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:26.232086   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:26.259044   92925 cri.go:89] found id: ""
	I1213 19:13:26.259066   92925 logs.go:282] 0 containers: []
	W1213 19:13:26.259075   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:26.259080   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:26.259137   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:26.287771   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:26.287793   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:26.287798   92925 cri.go:89] found id: ""
	I1213 19:13:26.287805   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:26.287861   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:26.293156   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:26.296722   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:26.296805   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:26.323701   92925 cri.go:89] found id: ""
	I1213 19:13:26.323731   92925 logs.go:282] 0 containers: []
	W1213 19:13:26.323746   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:26.323753   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:26.323820   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:26.350119   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:26.350137   92925 cri.go:89] found id: ""
	I1213 19:13:26.350145   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:26.350199   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:26.353849   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:26.353916   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:26.380009   92925 cri.go:89] found id: ""
	I1213 19:13:26.380035   92925 logs.go:282] 0 containers: []
	W1213 19:13:26.380044   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:26.380053   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:26.380065   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:26.438029   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:26.438062   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:26.475066   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:26.475096   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:26.507857   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:26.507887   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:26.521466   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:26.521493   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:26.565942   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:26.565983   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:26.634647   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:26.634680   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:26.662943   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:26.662972   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:26.737712   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:26.737749   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:26.840754   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:26.840792   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:26.911511   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:26.903881    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:26.904637    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:26.906164    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:26.906441    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:26.907906    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:26.903881    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:26.904637    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:26.906164    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:26.906441    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:26.907906    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:26.911534   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:26.911547   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:29.438403   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:29.449664   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:29.449742   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:29.477323   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:29.477342   92925 cri.go:89] found id: ""
	I1213 19:13:29.477351   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:29.477405   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:29.480946   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:29.481052   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:29.515446   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:29.515469   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:29.515473   92925 cri.go:89] found id: ""
	I1213 19:13:29.515480   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:29.515537   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:29.520209   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:29.523894   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:29.523994   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:29.550207   92925 cri.go:89] found id: ""
	I1213 19:13:29.550232   92925 logs.go:282] 0 containers: []
	W1213 19:13:29.550242   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:29.550272   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:29.550349   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:29.576154   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:29.576177   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:29.576182   92925 cri.go:89] found id: ""
	I1213 19:13:29.576195   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:29.576267   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:29.580154   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:29.583801   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:29.583876   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:29.613771   92925 cri.go:89] found id: ""
	I1213 19:13:29.613795   92925 logs.go:282] 0 containers: []
	W1213 19:13:29.613805   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:29.613810   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:29.613872   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:29.640080   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:29.640103   92925 cri.go:89] found id: ""
	I1213 19:13:29.640112   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:29.640167   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:29.643810   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:29.643883   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:29.674496   92925 cri.go:89] found id: ""
	I1213 19:13:29.674567   92925 logs.go:282] 0 containers: []
	W1213 19:13:29.674583   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:29.674592   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:29.674616   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:29.704354   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:29.704383   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:29.760688   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:29.760724   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:29.789616   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:29.789644   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:29.817300   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:29.817328   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:29.848838   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:29.848866   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:29.949492   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:29.949527   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:30.081487   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:30.081528   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:30.170948   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:30.170989   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:30.251666   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:30.251705   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:30.265404   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:30.265433   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:30.340984   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:30.332491    8789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:30.333283    8789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:30.335347    8789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:30.335760    8789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:30.337330    8789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:30.332491    8789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:30.333283    8789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:30.335347    8789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:30.335760    8789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:30.337330    8789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:32.841244   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:32.851830   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:32.851904   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:32.878262   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:32.878282   92925 cri.go:89] found id: ""
	I1213 19:13:32.878290   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:32.878345   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:32.881794   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:32.881871   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:32.908784   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:32.908807   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:32.908812   92925 cri.go:89] found id: ""
	I1213 19:13:32.908819   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:32.908877   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:32.913113   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:32.916615   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:32.916713   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:32.945436   92925 cri.go:89] found id: ""
	I1213 19:13:32.945460   92925 logs.go:282] 0 containers: []
	W1213 19:13:32.945468   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:32.945474   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:32.945532   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:32.972389   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:32.972409   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:32.972414   92925 cri.go:89] found id: ""
	I1213 19:13:32.972421   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:32.972496   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:32.976105   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:32.979491   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:32.979558   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:33.013568   92925 cri.go:89] found id: ""
	I1213 19:13:33.013590   92925 logs.go:282] 0 containers: []
	W1213 19:13:33.013598   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:33.013604   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:33.013662   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:33.041534   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:33.041557   92925 cri.go:89] found id: ""
	I1213 19:13:33.041566   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:33.041622   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:33.045294   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:33.045445   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:33.074126   92925 cri.go:89] found id: ""
	I1213 19:13:33.074196   92925 logs.go:282] 0 containers: []
	W1213 19:13:33.074224   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:33.074248   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:33.074274   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:33.108085   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:33.108112   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:33.196053   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:33.196096   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:33.238729   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:33.238801   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:33.334220   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:33.334258   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:33.347401   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:33.347431   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:33.415328   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:33.415362   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:33.444593   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:33.444672   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:33.519042   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:33.509468    8903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:33.510273    8903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:33.511953    8903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:33.512620    8903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:33.513636    8903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:33.509468    8903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:33.510273    8903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:33.511953    8903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:33.512620    8903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:33.513636    8903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:33.519066   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:33.519078   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:33.546564   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:33.546593   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:33.588382   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:33.588418   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:36.135267   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:36.146588   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:36.146662   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:36.173719   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:36.173741   92925 cri.go:89] found id: ""
	I1213 19:13:36.173750   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:36.173821   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:36.177610   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:36.177680   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:36.204513   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:36.204536   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:36.204540   92925 cri.go:89] found id: ""
	I1213 19:13:36.204548   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:36.204602   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:36.208516   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:36.211831   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:36.211901   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:36.243167   92925 cri.go:89] found id: ""
	I1213 19:13:36.243194   92925 logs.go:282] 0 containers: []
	W1213 19:13:36.243205   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:36.243211   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:36.243271   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:36.272787   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:36.272812   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:36.272817   92925 cri.go:89] found id: ""
	I1213 19:13:36.272825   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:36.272880   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:36.276627   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:36.280060   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:36.280182   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:36.309203   92925 cri.go:89] found id: ""
	I1213 19:13:36.309231   92925 logs.go:282] 0 containers: []
	W1213 19:13:36.309242   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:36.309248   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:36.309310   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:36.342531   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:36.342554   92925 cri.go:89] found id: ""
	I1213 19:13:36.342563   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:36.342631   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:36.346318   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:36.346392   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:36.374406   92925 cri.go:89] found id: ""
	I1213 19:13:36.374442   92925 logs.go:282] 0 containers: []
	W1213 19:13:36.374467   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:36.374485   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:36.374497   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:36.474302   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:36.474340   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:36.557406   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:36.549415    9001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:36.550022    9001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:36.551319    9001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:36.551900    9001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:36.553579    9001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:36.549415    9001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:36.550022    9001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:36.551319    9001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:36.551900    9001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:36.553579    9001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:36.557430   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:36.557443   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:36.583387   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:36.583415   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:36.623378   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:36.623413   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:36.666931   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:36.666964   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:36.696482   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:36.696513   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:36.730677   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:36.730708   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:36.743357   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:36.743386   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:36.813864   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:36.813900   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:36.848686   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:36.848716   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:39.433464   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:39.444066   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:39.444136   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:39.471666   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:39.471686   92925 cri.go:89] found id: ""
	I1213 19:13:39.471693   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:39.471753   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:39.475549   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:39.475641   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:39.505541   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:39.505615   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:39.505645   92925 cri.go:89] found id: ""
	I1213 19:13:39.505667   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:39.505752   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:39.511310   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:39.515781   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:39.515898   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:39.545256   92925 cri.go:89] found id: ""
	I1213 19:13:39.545290   92925 logs.go:282] 0 containers: []
	W1213 19:13:39.545300   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:39.545306   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:39.545379   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:39.576057   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:39.576080   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:39.576085   92925 cri.go:89] found id: ""
	I1213 19:13:39.576092   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:39.576146   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:39.580177   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:39.584087   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:39.584160   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:39.610819   92925 cri.go:89] found id: ""
	I1213 19:13:39.610843   92925 logs.go:282] 0 containers: []
	W1213 19:13:39.610863   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:39.610871   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:39.610929   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:39.638458   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:39.638481   92925 cri.go:89] found id: ""
	I1213 19:13:39.638503   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:39.638564   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:39.642537   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:39.642610   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:39.670872   92925 cri.go:89] found id: ""
	I1213 19:13:39.670951   92925 logs.go:282] 0 containers: []
	W1213 19:13:39.670975   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:39.670998   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:39.671043   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:39.774702   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:39.774743   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:39.846826   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:39.837968    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:39.838545    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:39.840574    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:39.841359    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:39.842988    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:39.837968    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:39.838545    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:39.840574    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:39.841359    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:39.842988    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:39.846849   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:39.846862   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:39.892712   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:39.892743   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:39.960690   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:39.960729   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:40.022528   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:40.022560   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:40.107424   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:40.107461   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:40.149433   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:40.149472   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:40.162446   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:40.162479   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:40.191980   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:40.192009   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:40.239148   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:40.239228   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:42.771936   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:42.782654   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:42.782726   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:42.808850   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:42.808869   92925 cri.go:89] found id: ""
	I1213 19:13:42.808877   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:42.808938   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:42.812682   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:42.812753   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:42.840980   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:42.841072   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:42.841097   92925 cri.go:89] found id: ""
	I1213 19:13:42.841122   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:42.841210   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:42.844946   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:42.848726   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:42.848811   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:42.888597   92925 cri.go:89] found id: ""
	I1213 19:13:42.888663   92925 logs.go:282] 0 containers: []
	W1213 19:13:42.888688   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:42.888707   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:42.888791   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:42.916253   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:42.916323   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:42.916341   92925 cri.go:89] found id: ""
	I1213 19:13:42.916364   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:42.916443   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:42.920031   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:42.923493   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:42.923565   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:42.950967   92925 cri.go:89] found id: ""
	I1213 19:13:42.950991   92925 logs.go:282] 0 containers: []
	W1213 19:13:42.950999   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:42.951005   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:42.951062   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:42.977861   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:42.977884   92925 cri.go:89] found id: ""
	I1213 19:13:42.977892   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:42.977946   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:42.985150   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:42.985252   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:43.014767   92925 cri.go:89] found id: ""
	I1213 19:13:43.014794   92925 logs.go:282] 0 containers: []
	W1213 19:13:43.014803   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:43.014813   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:43.014826   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:43.089031   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:43.089070   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:43.152812   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:43.152840   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:43.253685   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:43.253720   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:43.268102   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:43.268130   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:43.342529   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:43.333442    9294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:43.333905    9294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:43.335923    9294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:43.336467    9294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:43.338397    9294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:43.333442    9294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:43.333905    9294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:43.335923    9294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:43.336467    9294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:43.338397    9294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:43.342553   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:43.342566   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:43.383957   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:43.383996   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:43.431627   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:43.431662   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:43.504349   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:43.504386   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:43.541135   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:43.541167   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:43.570288   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:43.570315   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
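	The loop above (and every retry that follows) fails at the same point: kubectl cannot reach the API server on localhost:8443. A minimal sketch of reproducing that check by hand, assuming shell access to the affected node (the profile placeholder and the /healthz probe are assumptions; the pgrep and crictl commands mirror the ones the log-gathering loop runs):

	# open a shell on the node (profile name is illustrative)
	minikube ssh -p <profile>
	# confirm whether a kube-apiserver process is running, as the loop's pgrep does
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# list the apiserver container and tail its log, as the log-gathering loop does
	sudo crictl ps -a --name=kube-apiserver
	sudo crictl logs --tail 400 <container-id>
	# the same refusal kubectl reports: nothing is answering on 8443
	curl -sk https://localhost:8443/healthz || echo 'connection refused'

	If the curl probe is refused while the apiserver container shows as exited or restarting, the repeated "describe nodes" failures below are expected until the apiserver comes back up.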
	I1213 19:13:46.101243   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:46.114537   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:46.114605   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:46.142285   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:46.142310   92925 cri.go:89] found id: ""
	I1213 19:13:46.142319   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:46.142374   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:46.146198   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:46.146275   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:46.172413   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:46.172485   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:46.172504   92925 cri.go:89] found id: ""
	I1213 19:13:46.172529   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:46.172649   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:46.176629   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:46.180398   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:46.180514   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:46.208892   92925 cri.go:89] found id: ""
	I1213 19:13:46.208925   92925 logs.go:282] 0 containers: []
	W1213 19:13:46.208934   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:46.208942   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:46.209074   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:46.237365   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:46.237388   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:46.237394   92925 cri.go:89] found id: ""
	I1213 19:13:46.237401   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:46.237458   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:46.241815   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:46.245384   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:46.245482   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:46.272996   92925 cri.go:89] found id: ""
	I1213 19:13:46.273063   92925 logs.go:282] 0 containers: []
	W1213 19:13:46.273072   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:46.273078   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:46.273160   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:46.302629   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:46.302654   92925 cri.go:89] found id: ""
	I1213 19:13:46.302663   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:46.302737   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:46.306762   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:46.306861   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:46.337280   92925 cri.go:89] found id: ""
	I1213 19:13:46.337346   92925 logs.go:282] 0 containers: []
	W1213 19:13:46.337369   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:46.337384   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:46.337395   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:46.349174   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:46.349204   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:46.419942   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:46.411077    9420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:46.411612    9420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:46.413348    9420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:46.413991    9420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:46.415827    9420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:46.411077    9420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:46.411612    9420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:46.413348    9420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:46.413991    9420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:46.415827    9420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:46.419977   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:46.419993   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:46.446859   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:46.446885   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:46.487087   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:46.487124   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:46.547232   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:46.547267   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:46.574826   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:46.574854   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:46.602584   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:46.602609   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:46.640086   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:46.640117   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:46.740777   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:46.740818   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:46.812315   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:46.812357   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:49.395199   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:49.405934   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:49.406009   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:49.433789   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:49.433810   92925 cri.go:89] found id: ""
	I1213 19:13:49.433827   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:49.433883   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:49.437578   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:49.437651   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:49.471711   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:49.471734   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:49.471740   92925 cri.go:89] found id: ""
	I1213 19:13:49.471748   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:49.471801   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:49.475461   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:49.479094   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:49.479168   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:49.505391   92925 cri.go:89] found id: ""
	I1213 19:13:49.505417   92925 logs.go:282] 0 containers: []
	W1213 19:13:49.505426   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:49.505433   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:49.505488   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:49.540863   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:49.540890   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:49.540895   92925 cri.go:89] found id: ""
	I1213 19:13:49.540903   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:49.540960   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:49.544771   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:49.548451   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:49.548524   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:49.575402   92925 cri.go:89] found id: ""
	I1213 19:13:49.575428   92925 logs.go:282] 0 containers: []
	W1213 19:13:49.575436   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:49.575442   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:49.575501   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:49.605123   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:49.605143   92925 cri.go:89] found id: ""
	I1213 19:13:49.605151   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:49.605211   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:49.608919   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:49.609061   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:49.637050   92925 cri.go:89] found id: ""
	I1213 19:13:49.637075   92925 logs.go:282] 0 containers: []
	W1213 19:13:49.637084   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:49.637093   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:49.637105   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:49.744000   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:49.744048   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:49.811345   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:49.802050    9555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:49.802444    9555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:49.805468    9555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:49.805922    9555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:49.807507    9555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:49.802050    9555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:49.802444    9555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:49.805468    9555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:49.805922    9555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:49.807507    9555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:49.811370   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:49.811384   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:49.852043   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:49.852081   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:49.896314   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:49.896349   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:49.924211   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:49.924240   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:50.006219   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:50.006263   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:50.039895   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:50.039978   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:50.054629   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:50.054656   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:50.084937   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:50.084966   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:50.159510   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:50.159553   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:52.688326   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:52.699486   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:52.699554   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:52.726195   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:52.726216   92925 cri.go:89] found id: ""
	I1213 19:13:52.726224   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:52.726280   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:52.730715   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:52.730785   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:52.756911   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:52.756933   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:52.756938   92925 cri.go:89] found id: ""
	I1213 19:13:52.756946   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:52.757069   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:52.760788   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:52.764452   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:52.764551   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:52.790658   92925 cri.go:89] found id: ""
	I1213 19:13:52.790732   92925 logs.go:282] 0 containers: []
	W1213 19:13:52.790749   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:52.790756   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:52.790816   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:52.818365   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:52.818388   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:52.818394   92925 cri.go:89] found id: ""
	I1213 19:13:52.818402   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:52.818477   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:52.822460   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:52.826054   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:52.826130   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:52.853218   92925 cri.go:89] found id: ""
	I1213 19:13:52.853245   92925 logs.go:282] 0 containers: []
	W1213 19:13:52.853256   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:52.853262   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:52.853321   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:52.879712   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:52.879736   92925 cri.go:89] found id: ""
	I1213 19:13:52.879744   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:52.879798   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:52.883563   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:52.883639   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:52.910499   92925 cri.go:89] found id: ""
	I1213 19:13:52.910526   92925 logs.go:282] 0 containers: []
	W1213 19:13:52.910535   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:52.910545   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:52.910577   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:52.990183   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:52.990219   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:53.026776   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:53.026805   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:53.118043   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:53.107629    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:53.110332    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:53.111160    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:53.112144    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:53.113182    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:53.107629    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:53.110332    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:53.111160    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:53.112144    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:53.113182    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:53.118090   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:53.118141   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:53.160995   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:53.161190   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:53.204763   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:53.204795   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:53.270772   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:53.270810   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:53.370857   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:53.370895   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:53.383046   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:53.383074   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:53.410648   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:53.410684   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:53.439739   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:53.439768   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:55.970243   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:55.981613   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:55.981689   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:56.018614   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:56.018637   92925 cri.go:89] found id: ""
	I1213 19:13:56.018647   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:56.018707   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:56.022914   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:56.022990   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:56.056158   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:56.056182   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:56.056187   92925 cri.go:89] found id: ""
	I1213 19:13:56.056194   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:56.056275   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:56.061504   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:56.065201   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:56.065281   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:56.094861   92925 cri.go:89] found id: ""
	I1213 19:13:56.094887   92925 logs.go:282] 0 containers: []
	W1213 19:13:56.094896   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:56.094903   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:56.094982   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:56.133165   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:56.133240   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:56.133260   92925 cri.go:89] found id: ""
	I1213 19:13:56.133291   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:56.133356   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:56.137225   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:56.140713   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:56.140785   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:56.168013   92925 cri.go:89] found id: ""
	I1213 19:13:56.168039   92925 logs.go:282] 0 containers: []
	W1213 19:13:56.168048   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:56.168055   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:56.168118   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:56.196793   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:56.196867   92925 cri.go:89] found id: ""
	I1213 19:13:56.196876   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:56.196935   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:56.200591   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:56.200672   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:56.227851   92925 cri.go:89] found id: ""
	I1213 19:13:56.227877   92925 logs.go:282] 0 containers: []
	W1213 19:13:56.227887   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:56.227896   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:56.227908   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:56.323380   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:56.323416   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:56.337259   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:56.337289   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:56.362908   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:56.362939   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:56.443333   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:56.443372   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:56.522467   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:56.511318    9846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:56.512215    9846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:56.514040    9846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:56.515835    9846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:56.516378    9846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:56.511318    9846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:56.512215    9846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:56.514040    9846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:56.515835    9846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:56.516378    9846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:56.522485   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:56.522498   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:56.561809   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:56.561843   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:56.606943   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:56.606979   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:56.678268   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:56.678310   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:56.707280   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:56.707309   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:56.736890   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:56.736917   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:59.286954   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:59.298376   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:59.298447   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:59.325376   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:59.325399   92925 cri.go:89] found id: ""
	I1213 19:13:59.325407   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:59.325464   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:59.329049   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:59.329123   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:59.356066   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:59.356085   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:59.356089   92925 cri.go:89] found id: ""
	I1213 19:13:59.356097   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:59.356150   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:59.360113   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:59.363660   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:59.363736   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:59.389568   92925 cri.go:89] found id: ""
	I1213 19:13:59.389594   92925 logs.go:282] 0 containers: []
	W1213 19:13:59.389604   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:59.389611   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:59.389692   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:59.423243   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:59.423266   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:59.423270   92925 cri.go:89] found id: ""
	I1213 19:13:59.423278   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:59.423350   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:59.426944   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:59.431770   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:59.431844   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:59.458103   92925 cri.go:89] found id: ""
	I1213 19:13:59.458173   92925 logs.go:282] 0 containers: []
	W1213 19:13:59.458220   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:59.458246   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:59.458332   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:59.487250   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:59.487324   92925 cri.go:89] found id: ""
	I1213 19:13:59.487340   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:59.487406   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:59.491784   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:59.491852   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:59.525717   92925 cri.go:89] found id: ""
	I1213 19:13:59.525739   92925 logs.go:282] 0 containers: []
	W1213 19:13:59.525748   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:59.525756   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:59.525768   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:59.554063   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:59.554091   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:59.599874   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:59.599909   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:59.626733   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:59.626765   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:59.700778   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:59.700814   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:59.713358   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:59.713388   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:59.783137   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:59.774677   10001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:59.775356   10001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:59.776867   10001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:59.777580   10001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:59.778486   10001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:59.774677   10001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:59.775356   10001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:59.776867   10001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:59.777580   10001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:59.778486   10001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:59.783158   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:59.783169   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:59.832218   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:59.832248   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:59.901253   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:59.901329   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:59.930678   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:59.930701   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:59.962070   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:59.962099   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:02.744450   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:02.755514   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:02.755587   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:02.782984   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:02.783079   92925 cri.go:89] found id: ""
	I1213 19:14:02.783095   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:02.783157   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:02.787187   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:02.787262   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:02.814931   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:02.814954   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:02.814959   92925 cri.go:89] found id: ""
	I1213 19:14:02.814967   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:02.815031   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:02.818983   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:02.822788   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:02.822865   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:02.848942   92925 cri.go:89] found id: ""
	I1213 19:14:02.848966   92925 logs.go:282] 0 containers: []
	W1213 19:14:02.848975   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:02.848991   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:02.849096   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:02.876134   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:02.876155   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:02.876160   92925 cri.go:89] found id: ""
	I1213 19:14:02.876168   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:02.876249   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:02.880576   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:02.885335   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:02.885459   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:02.913660   92925 cri.go:89] found id: ""
	I1213 19:14:02.913733   92925 logs.go:282] 0 containers: []
	W1213 19:14:02.913763   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:02.913802   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:02.913924   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:02.940178   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:02.940248   92925 cri.go:89] found id: ""
	I1213 19:14:02.940270   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:02.940359   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:02.944376   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:02.944500   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:02.975815   92925 cri.go:89] found id: ""
	I1213 19:14:02.975838   92925 logs.go:282] 0 containers: []
	W1213 19:14:02.975846   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:02.975855   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:02.975867   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:03.074688   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:03.074723   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:03.156277   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:03.147816   10113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:03.148501   10113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:03.150174   10113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:03.150777   10113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:03.152270   10113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:03.147816   10113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:03.148501   10113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:03.150174   10113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:03.150777   10113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:03.152270   10113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:03.156299   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:03.156311   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:03.182450   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:03.182477   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:03.221147   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:03.221181   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:03.292920   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:03.292962   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:03.323958   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:03.323983   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:03.397255   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:03.397289   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:03.410296   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:03.410325   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:03.465930   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:03.465966   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:03.497989   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:03.498017   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
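The cycle above repeats roughly every three seconds while the apiserver stays unreachable: minikube looks for a kube-apiserver process, enumerates the control-plane containers per component with crictl, then tails each component's logs along with the kubelet and CRI-O journals. A minimal sketch of the same checks, run by hand on the node (assuming crictl and journalctl are available there, as the log indicates), would be:

	# locate control-plane containers (any state) by name
	sudo crictl ps -a --quiet --name=kube-apiserver
	sudo crictl ps -a --quiet --name=etcd
	sudo crictl ps -a --quiet --name=kube-scheduler

	# tail the logs of a container id returned above
	sudo crictl logs --tail 400 <container-id>

	# host-side journals for the kubelet and the CRI-O runtime
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400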
	I1213 19:14:06.058798   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:06.069576   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:06.069643   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:06.097652   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:06.097675   92925 cri.go:89] found id: ""
	I1213 19:14:06.097684   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:06.097767   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:06.103860   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:06.103983   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:06.133321   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:06.133354   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:06.133359   92925 cri.go:89] found id: ""
	I1213 19:14:06.133367   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:06.133434   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:06.137349   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:06.140932   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:06.141036   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:06.174768   92925 cri.go:89] found id: ""
	I1213 19:14:06.174796   92925 logs.go:282] 0 containers: []
	W1213 19:14:06.174806   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:06.174813   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:06.174923   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:06.202214   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:06.202245   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:06.202249   92925 cri.go:89] found id: ""
	I1213 19:14:06.202257   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:06.202315   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:06.206201   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:06.209869   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:06.209950   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:06.240738   92925 cri.go:89] found id: ""
	I1213 19:14:06.240762   92925 logs.go:282] 0 containers: []
	W1213 19:14:06.240771   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:06.240777   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:06.240838   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:06.267045   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:06.267067   92925 cri.go:89] found id: ""
	I1213 19:14:06.267076   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:06.267134   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:06.270950   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:06.271059   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:06.298538   92925 cri.go:89] found id: ""
	I1213 19:14:06.298566   92925 logs.go:282] 0 containers: []
	W1213 19:14:06.298576   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:06.298585   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:06.298600   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:06.401303   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:06.401348   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:06.414599   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:06.414631   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:06.441984   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:06.442056   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:06.481290   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:06.481321   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:06.541131   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:06.541162   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:06.614944   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:06.614978   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:06.700895   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:06.700937   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:06.734007   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:06.734036   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:06.804578   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:06.795862   10300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:06.796443   10300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:06.798255   10300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:06.798765   10300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:06.800521   10300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:06.795862   10300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:06.796443   10300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:06.798255   10300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:06.798765   10300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:06.800521   10300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:06.804604   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:06.804616   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:06.832247   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:06.832275   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:09.358770   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:09.369376   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:09.369446   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:09.397174   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:09.397250   92925 cri.go:89] found id: ""
	I1213 19:14:09.397268   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:09.397341   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:09.401282   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:09.401379   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:09.430806   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:09.430829   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:09.430834   92925 cri.go:89] found id: ""
	I1213 19:14:09.430842   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:09.430895   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:09.434593   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:09.437861   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:09.437931   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:09.462972   92925 cri.go:89] found id: ""
	I1213 19:14:09.463040   92925 logs.go:282] 0 containers: []
	W1213 19:14:09.463067   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:09.463087   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:09.463154   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:09.489906   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:09.489930   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:09.489935   92925 cri.go:89] found id: ""
	I1213 19:14:09.489943   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:09.490000   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:09.493996   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:09.497780   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:09.497895   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:09.529207   92925 cri.go:89] found id: ""
	I1213 19:14:09.529232   92925 logs.go:282] 0 containers: []
	W1213 19:14:09.529241   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:09.529280   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:09.529364   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:09.556267   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:09.556289   92925 cri.go:89] found id: ""
	I1213 19:14:09.556297   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:09.556383   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:09.560687   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:09.560770   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:09.592345   92925 cri.go:89] found id: ""
	I1213 19:14:09.592380   92925 logs.go:282] 0 containers: []
	W1213 19:14:09.592389   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:09.592398   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:09.592410   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:09.604889   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:09.604917   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:09.631468   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:09.631498   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:09.670679   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:09.670712   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:09.715815   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:09.715851   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:09.743494   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:09.743523   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:09.775725   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:09.775753   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:09.873965   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:09.874039   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:09.959605   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:09.948036   10435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:09.948708   10435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:09.950229   10435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:09.950803   10435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:09.952453   10435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:09.948036   10435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:09.948708   10435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:09.950229   10435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:09.950803   10435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:09.952453   10435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:09.959680   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:09.959707   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:10.051190   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:10.051228   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:10.086712   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:10.086738   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:12.672644   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:12.683960   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:12.684058   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:12.712689   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:12.712710   92925 cri.go:89] found id: ""
	I1213 19:14:12.712718   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:12.712772   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:12.716732   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:12.716806   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:12.744449   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:12.744468   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:12.744473   92925 cri.go:89] found id: ""
	I1213 19:14:12.744480   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:12.744548   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:12.748558   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:12.752120   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:12.752195   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:12.779575   92925 cri.go:89] found id: ""
	I1213 19:14:12.779602   92925 logs.go:282] 0 containers: []
	W1213 19:14:12.779611   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:12.779617   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:12.779677   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:12.808259   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:12.808279   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:12.808284   92925 cri.go:89] found id: ""
	I1213 19:14:12.808292   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:12.808348   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:12.812274   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:12.816250   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:12.816380   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:12.842528   92925 cri.go:89] found id: ""
	I1213 19:14:12.842556   92925 logs.go:282] 0 containers: []
	W1213 19:14:12.842566   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:12.842572   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:12.842655   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:12.870846   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:12.870916   92925 cri.go:89] found id: ""
	I1213 19:14:12.870939   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:12.871003   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:12.874709   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:12.874809   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:12.901168   92925 cri.go:89] found id: ""
	I1213 19:14:12.901194   92925 logs.go:282] 0 containers: []
	W1213 19:14:12.901203   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:12.901212   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:12.901224   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:12.993856   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:12.993888   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:13.006289   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:13.006320   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:13.038515   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:13.038544   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:13.101746   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:13.101795   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:13.153697   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:13.153736   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:13.183337   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:13.183366   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:13.262960   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:13.262995   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:13.297818   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:13.297845   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:13.368622   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:13.360485   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:13.361349   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:13.363057   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:13.363352   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:13.364843   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:13.360485   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:13.361349   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:13.363057   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:13.363352   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:13.364843   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:13.368650   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:13.368664   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:13.439804   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:13.439843   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:15.976229   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:15.989077   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:15.989247   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:16.020054   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:16.020079   92925 cri.go:89] found id: ""
	I1213 19:14:16.020087   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:16.020158   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:16.024026   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:16.024118   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:16.051647   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:16.051670   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:16.051681   92925 cri.go:89] found id: ""
	I1213 19:14:16.051688   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:16.051772   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:16.055489   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:16.059115   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:16.059234   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:16.086414   92925 cri.go:89] found id: ""
	I1213 19:14:16.086438   92925 logs.go:282] 0 containers: []
	W1213 19:14:16.086447   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:16.086453   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:16.086513   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:16.118349   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:16.118415   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:16.118434   92925 cri.go:89] found id: ""
	I1213 19:14:16.118458   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:16.118545   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:16.122398   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:16.129488   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:16.129561   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:16.156699   92925 cri.go:89] found id: ""
	I1213 19:14:16.156725   92925 logs.go:282] 0 containers: []
	W1213 19:14:16.156734   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:16.156740   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:16.156799   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:16.183419   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:16.183444   92925 cri.go:89] found id: ""
	I1213 19:14:16.183465   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:16.183520   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:16.187500   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:16.187599   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:16.213532   92925 cri.go:89] found id: ""
	I1213 19:14:16.213610   92925 logs.go:282] 0 containers: []
	W1213 19:14:16.213634   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:16.213657   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:16.213703   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:16.225956   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:16.225985   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:16.299377   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:16.290117   10665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:16.291089   10665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:16.292835   10665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:16.293694   10665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:16.295412   10665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:16.290117   10665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:16.291089   10665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:16.292835   10665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:16.293694   10665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:16.295412   10665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:16.299401   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:16.299416   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:16.327259   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:16.327288   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:16.353346   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:16.353376   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:16.380053   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:16.380079   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:16.415886   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:16.415918   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:16.512571   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:16.512605   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:16.557415   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:16.557451   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:16.616391   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:16.616424   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:16.692096   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:16.692131   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:19.277525   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:19.287988   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:19.288109   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:19.314035   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:19.314055   92925 cri.go:89] found id: ""
	I1213 19:14:19.314064   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:19.314137   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:19.317785   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:19.317856   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:19.344128   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:19.344151   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:19.344155   92925 cri.go:89] found id: ""
	I1213 19:14:19.344163   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:19.344216   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:19.348619   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:19.351872   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:19.351961   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:19.377237   92925 cri.go:89] found id: ""
	I1213 19:14:19.377263   92925 logs.go:282] 0 containers: []
	W1213 19:14:19.377272   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:19.377278   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:19.377360   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:19.404210   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:19.404233   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:19.404238   92925 cri.go:89] found id: ""
	I1213 19:14:19.404245   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:19.404318   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:19.407909   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:19.411268   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:19.411336   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:19.437051   92925 cri.go:89] found id: ""
	I1213 19:14:19.437075   92925 logs.go:282] 0 containers: []
	W1213 19:14:19.437083   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:19.437089   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:19.437147   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:19.461816   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:19.461847   92925 cri.go:89] found id: ""
	I1213 19:14:19.461856   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:19.461911   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:19.465492   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:19.465587   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:19.491501   92925 cri.go:89] found id: ""
	I1213 19:14:19.491527   92925 logs.go:282] 0 containers: []
	W1213 19:14:19.491536   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:19.491545   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:19.491588   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:19.530624   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:19.530652   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:19.570388   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:19.570423   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:19.649601   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:19.649638   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:19.682548   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:19.682579   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:19.765347   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:19.765383   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:19.797401   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:19.797430   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:19.892983   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:19.893036   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:19.905252   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:19.905281   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:19.976038   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:19.968048   10849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:19.968518   10849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:19.969788   10849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:19.970473   10849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:19.972132   10849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:19.968048   10849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:19.968518   10849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:19.969788   10849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:19.970473   10849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:19.972132   10849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:19.976061   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:19.976074   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:20.015893   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:20.015932   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:22.580793   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:22.591726   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:22.591801   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:22.617941   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:22.617972   92925 cri.go:89] found id: ""
	I1213 19:14:22.617981   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:22.618039   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:22.621895   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:22.621967   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:22.648715   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:22.648778   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:22.648797   92925 cri.go:89] found id: ""
	I1213 19:14:22.648821   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:22.648904   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:22.653305   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:22.657032   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:22.657104   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:22.686906   92925 cri.go:89] found id: ""
	I1213 19:14:22.686932   92925 logs.go:282] 0 containers: []
	W1213 19:14:22.686946   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:22.686952   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:22.687013   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:22.714929   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:22.714951   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:22.714956   92925 cri.go:89] found id: ""
	I1213 19:14:22.714964   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:22.715025   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:22.719071   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:22.722714   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:22.722784   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:22.750440   92925 cri.go:89] found id: ""
	I1213 19:14:22.750470   92925 logs.go:282] 0 containers: []
	W1213 19:14:22.750480   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:22.750486   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:22.750549   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:22.777550   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:22.777572   92925 cri.go:89] found id: ""
	I1213 19:14:22.777580   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:22.777635   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:22.781380   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:22.781475   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:22.816511   92925 cri.go:89] found id: ""
	I1213 19:14:22.816537   92925 logs.go:282] 0 containers: []
	W1213 19:14:22.816547   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:22.816572   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:22.816617   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:22.842295   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:22.842322   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:22.882060   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:22.882095   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:22.965336   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:22.965374   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:22.995696   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:22.995731   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:23.098694   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:23.098782   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:23.117712   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:23.117743   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:23.167456   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:23.167497   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:23.195171   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:23.195199   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:23.279228   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:23.279264   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:23.318709   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:23.318738   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:23.384532   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:23.376056   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:23.376628   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:23.378283   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:23.379367   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:23.379806   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:23.376056   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:23.376628   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:23.378283   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:23.379367   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:23.379806   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:25.885566   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:25.896623   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:25.896696   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:25.924503   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:25.924535   92925 cri.go:89] found id: ""
	I1213 19:14:25.924544   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:25.924601   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:25.928341   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:25.928413   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:25.966385   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:25.966404   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:25.966409   92925 cri.go:89] found id: ""
	I1213 19:14:25.966417   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:25.966471   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:25.970190   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:25.974101   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:25.974229   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:26.004380   92925 cri.go:89] found id: ""
	I1213 19:14:26.004456   92925 logs.go:282] 0 containers: []
	W1213 19:14:26.004479   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:26.004498   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:26.004595   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:26.031828   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:26.031853   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:26.031860   92925 cri.go:89] found id: ""
	I1213 19:14:26.031868   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:26.031925   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:26.036387   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:26.040161   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:26.040235   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:26.070525   92925 cri.go:89] found id: ""
	I1213 19:14:26.070591   92925 logs.go:282] 0 containers: []
	W1213 19:14:26.070616   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:26.070635   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:26.070724   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:26.108253   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:26.108277   92925 cri.go:89] found id: ""
	I1213 19:14:26.108294   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:26.108373   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:26.112191   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:26.112324   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:26.146018   92925 cri.go:89] found id: ""
	I1213 19:14:26.146042   92925 logs.go:282] 0 containers: []
	W1213 19:14:26.146052   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:26.146060   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:26.146094   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:26.187197   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:26.187229   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:26.232694   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:26.232724   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:26.310398   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:26.310435   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:26.323748   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:26.323775   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:26.350662   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:26.350689   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:26.380636   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:26.380707   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:26.407064   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:26.407089   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:26.483950   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:26.483984   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:26.536817   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:26.536846   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:26.654750   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:26.654801   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:26.733679   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:26.725319   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:26.726046   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:26.727714   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:26.728228   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:26.729870   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:26.725319   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:26.726046   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:26.727714   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:26.728228   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:26.729870   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:29.233968   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:29.244666   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:29.244746   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:29.272994   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:29.273043   92925 cri.go:89] found id: ""
	I1213 19:14:29.273051   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:29.273108   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:29.277950   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:29.278022   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:29.304315   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:29.304334   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:29.304338   92925 cri.go:89] found id: ""
	I1213 19:14:29.304346   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:29.304402   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:29.308379   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:29.311905   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:29.311974   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:29.337925   92925 cri.go:89] found id: ""
	I1213 19:14:29.337953   92925 logs.go:282] 0 containers: []
	W1213 19:14:29.337962   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:29.337968   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:29.338028   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:29.365135   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:29.365156   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:29.365160   92925 cri.go:89] found id: ""
	I1213 19:14:29.365167   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:29.365222   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:29.368867   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:29.372263   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:29.372334   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:29.403367   92925 cri.go:89] found id: ""
	I1213 19:14:29.403393   92925 logs.go:282] 0 containers: []
	W1213 19:14:29.403402   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:29.403408   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:29.403466   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:29.429639   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:29.429703   92925 cri.go:89] found id: ""
	I1213 19:14:29.429718   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:29.429782   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:29.433301   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:29.433373   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:29.460244   92925 cri.go:89] found id: ""
	I1213 19:14:29.460272   92925 logs.go:282] 0 containers: []
	W1213 19:14:29.460282   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:29.460291   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:29.460302   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:29.555127   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:29.555166   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:29.583790   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:29.583827   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:29.646377   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:29.646409   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:29.720554   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:29.720592   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:29.751659   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:29.751686   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:29.788857   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:29.788883   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:29.800809   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:29.800844   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:29.869250   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:29.862112   11259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:29.862682   11259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:29.864146   11259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:29.864555   11259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:29.865755   11259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:29.862112   11259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:29.862682   11259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:29.864146   11259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:29.864555   11259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:29.865755   11259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:29.869274   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:29.869287   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:29.913688   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:29.913724   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:29.956382   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:29.956408   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:32.553678   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:32.565396   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:32.565470   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:32.592588   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:32.592613   92925 cri.go:89] found id: ""
	I1213 19:14:32.592622   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:32.592684   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:32.596429   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:32.596509   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:32.624469   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:32.624493   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:32.624499   92925 cri.go:89] found id: ""
	I1213 19:14:32.624506   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:32.624559   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:32.628270   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:32.631873   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:32.632003   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:32.657120   92925 cri.go:89] found id: ""
	I1213 19:14:32.657144   92925 logs.go:282] 0 containers: []
	W1213 19:14:32.657153   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:32.657159   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:32.657220   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:32.684878   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:32.684901   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:32.684906   92925 cri.go:89] found id: ""
	I1213 19:14:32.684914   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:32.684976   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:32.689235   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:32.692754   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:32.692825   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:32.722855   92925 cri.go:89] found id: ""
	I1213 19:14:32.722878   92925 logs.go:282] 0 containers: []
	W1213 19:14:32.722887   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:32.722893   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:32.722952   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:32.753685   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:32.753704   92925 cri.go:89] found id: ""
	I1213 19:14:32.753712   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:32.753764   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:32.758129   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:32.758214   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:32.784526   92925 cri.go:89] found id: ""
	I1213 19:14:32.784599   92925 logs.go:282] 0 containers: []
	W1213 19:14:32.784623   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:32.784645   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:32.784683   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:32.826015   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:32.826050   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:32.915444   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:32.915483   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:32.943132   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:32.943167   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:33.017904   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:33.017945   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:33.050228   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:33.050258   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:33.122559   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:33.114436   11385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:33.115150   11385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:33.116863   11385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:33.117500   11385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:33.118980   11385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:33.114436   11385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:33.115150   11385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:33.116863   11385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:33.117500   11385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:33.118980   11385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:33.122583   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:33.122597   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:33.177421   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:33.177455   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:33.206989   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:33.207016   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:33.305130   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:33.305169   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:33.319318   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:33.319416   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:35.847899   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:35.859028   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:35.859101   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:35.887722   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:35.887745   92925 cri.go:89] found id: ""
	I1213 19:14:35.887754   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:35.887807   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:35.891699   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:35.891771   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:35.920114   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:35.920138   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:35.920144   92925 cri.go:89] found id: ""
	I1213 19:14:35.920152   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:35.920222   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:35.923937   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:35.927605   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:35.927678   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:35.953980   92925 cri.go:89] found id: ""
	I1213 19:14:35.954007   92925 logs.go:282] 0 containers: []
	W1213 19:14:35.954016   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:35.954023   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:35.954080   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:35.980645   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:35.980665   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:35.980670   92925 cri.go:89] found id: ""
	I1213 19:14:35.980678   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:35.980742   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:35.991946   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:35.996641   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:35.996726   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:36.026202   92925 cri.go:89] found id: ""
	I1213 19:14:36.026228   92925 logs.go:282] 0 containers: []
	W1213 19:14:36.026238   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:36.026245   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:36.026350   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:36.051979   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:36.052001   92925 cri.go:89] found id: ""
	I1213 19:14:36.052010   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:36.052066   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:36.055868   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:36.055938   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:36.083649   92925 cri.go:89] found id: ""
	I1213 19:14:36.083675   92925 logs.go:282] 0 containers: []
	W1213 19:14:36.083685   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:36.083693   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:36.083704   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:36.164414   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:36.164464   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:36.198766   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:36.198793   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:36.298985   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:36.299028   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:36.346466   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:36.346498   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:36.376231   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:36.376258   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:36.403571   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:36.403597   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:36.417684   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:36.417714   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:36.487562   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:36.479494   11529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:36.480246   11529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:36.481848   11529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:36.482211   11529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:36.483808   11529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:36.479494   11529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:36.480246   11529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:36.481848   11529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:36.482211   11529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:36.483808   11529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:36.487585   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:36.487597   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:36.514488   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:36.514514   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:36.559954   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:36.559990   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:39.133526   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:39.150754   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:39.150826   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:39.179295   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:39.179315   92925 cri.go:89] found id: ""
	I1213 19:14:39.179324   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:39.179380   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:39.185538   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:39.185605   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:39.216427   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:39.216449   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:39.216454   92925 cri.go:89] found id: ""
	I1213 19:14:39.216462   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:39.216517   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:39.221041   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:39.225622   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:39.225691   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:39.251922   92925 cri.go:89] found id: ""
	I1213 19:14:39.251946   92925 logs.go:282] 0 containers: []
	W1213 19:14:39.251955   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:39.251961   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:39.252019   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:39.281875   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:39.281900   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:39.281905   92925 cri.go:89] found id: ""
	I1213 19:14:39.281912   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:39.281970   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:39.286420   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:39.290568   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:39.290663   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:39.315894   92925 cri.go:89] found id: ""
	I1213 19:14:39.315996   92925 logs.go:282] 0 containers: []
	W1213 19:14:39.316021   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:39.316041   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:39.316153   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:39.344960   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:39.344983   92925 cri.go:89] found id: ""
	I1213 19:14:39.344992   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:39.345091   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:39.348776   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:39.348847   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:39.378840   92925 cri.go:89] found id: ""
	I1213 19:14:39.378862   92925 logs.go:282] 0 containers: []
	W1213 19:14:39.378870   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:39.378879   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:39.378890   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:39.410058   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:39.410087   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:39.510110   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:39.510188   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:39.542821   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:39.542892   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:39.614365   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:39.605214   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:39.606127   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:39.607756   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:39.608303   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:39.610109   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:39.605214   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:39.606127   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:39.607756   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:39.608303   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:39.610109   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:39.614387   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:39.614403   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:39.656166   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:39.656199   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:39.700850   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:39.700887   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:39.735225   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:39.735267   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:39.765360   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:39.765396   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:39.856068   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:39.856115   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:39.883708   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:39.883738   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
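	
The pass above is one full cycle of the log collector: it probes for each control-plane container with crictl, then falls back to the kubelet and CRI-O journals, dmesg, and a kubectl "describe nodes" that keeps failing because nothing answers on localhost:8443. A minimal sketch of the same checks run by hand on the node, using only commands that already appear in the trace (the binary paths and the --tail 400 / -n 400 values are copied from it, not a recommendation of defaults):

	  sudo pgrep -xnf 'kube-apiserver.*minikube.*'        # is an apiserver process running at all?
	  sudo crictl ps -a --quiet --name=kube-apiserver     # any apiserver container, running or exited
	  sudo journalctl -u kubelet -n 400                    # recent kubelet activity
	  sudo journalctl -u crio -n 400                       # recent CRI-O activity
	  sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes \
	    --kubeconfig=/var/lib/minikube/kubeconfig          # fails with "connection refused" while :8443 is down
	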
	I1213 19:14:42.458661   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:42.469945   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:42.470018   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:42.497805   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:42.497831   92925 cri.go:89] found id: ""
	I1213 19:14:42.497840   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:42.497898   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:42.502059   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:42.502128   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:42.534485   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:42.534509   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:42.534514   92925 cri.go:89] found id: ""
	I1213 19:14:42.534521   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:42.534578   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:42.539929   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:42.544534   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:42.544618   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:42.572959   92925 cri.go:89] found id: ""
	I1213 19:14:42.572983   92925 logs.go:282] 0 containers: []
	W1213 19:14:42.572991   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:42.572998   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:42.573085   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:42.605231   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:42.605253   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:42.605257   92925 cri.go:89] found id: ""
	I1213 19:14:42.605265   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:42.605324   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:42.609379   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:42.613098   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:42.613183   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:42.641856   92925 cri.go:89] found id: ""
	I1213 19:14:42.641881   92925 logs.go:282] 0 containers: []
	W1213 19:14:42.641890   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:42.641897   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:42.641956   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:42.670835   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:42.670862   92925 cri.go:89] found id: ""
	I1213 19:14:42.670870   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:42.670923   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:42.674669   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:42.674780   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:42.701820   92925 cri.go:89] found id: ""
	I1213 19:14:42.701886   92925 logs.go:282] 0 containers: []
	W1213 19:14:42.701912   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:42.701935   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:42.701974   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:42.795111   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:42.795148   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:42.843272   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:42.843308   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:42.918660   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:42.918701   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:42.953437   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:42.953470   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:42.980705   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:42.980735   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:43.075228   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:43.075266   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:43.089833   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:43.089865   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:43.165554   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:43.156189   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:43.157143   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:43.158950   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:43.160521   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:43.161743   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:43.156189   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:43.157143   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:43.158950   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:43.160521   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:43.161743   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:43.165619   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:43.165648   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:43.195772   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:43.195850   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:43.266745   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:43.266781   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:45.800090   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:45.811228   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:45.811319   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:45.844476   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:45.844562   92925 cri.go:89] found id: ""
	I1213 19:14:45.844585   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:45.844658   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:45.848635   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:45.848730   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:45.878507   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:45.878532   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:45.878537   92925 cri.go:89] found id: ""
	I1213 19:14:45.878545   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:45.878626   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:45.883362   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:45.887015   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:45.887090   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:45.922472   92925 cri.go:89] found id: ""
	I1213 19:14:45.922495   92925 logs.go:282] 0 containers: []
	W1213 19:14:45.922504   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:45.922510   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:45.922571   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:45.961736   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:45.961766   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:45.961772   92925 cri.go:89] found id: ""
	I1213 19:14:45.961779   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:45.961846   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:45.965883   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:45.969985   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:45.970062   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:46.005121   92925 cri.go:89] found id: ""
	I1213 19:14:46.005143   92925 logs.go:282] 0 containers: []
	W1213 19:14:46.005153   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:46.005159   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:46.005218   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:46.033851   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:46.033871   92925 cri.go:89] found id: ""
	I1213 19:14:46.033878   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:46.033932   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:46.037737   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:46.037813   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:46.064426   92925 cri.go:89] found id: ""
	I1213 19:14:46.064493   92925 logs.go:282] 0 containers: []
	W1213 19:14:46.064517   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:46.064541   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:46.064580   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:46.162246   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:46.162285   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:46.175470   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:46.175500   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:46.249273   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:46.239319   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:46.240280   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:46.242150   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:46.242816   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:46.244382   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:46.239319   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:46.240280   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:46.242150   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:46.242816   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:46.244382   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:46.249333   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:46.249347   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:46.277985   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:46.278016   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:46.332032   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:46.332065   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:46.376410   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:46.376446   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:46.455695   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:46.455772   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:46.485453   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:46.485479   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:46.522886   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:46.522916   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:46.601217   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:46.601253   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:49.142956   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:49.157230   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:49.157309   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:49.185733   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:49.185767   92925 cri.go:89] found id: ""
	I1213 19:14:49.185775   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:49.185830   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:49.190180   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:49.190249   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:49.218248   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:49.218271   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:49.218276   92925 cri.go:89] found id: ""
	I1213 19:14:49.218285   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:49.218343   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:49.222331   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:49.226027   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:49.226107   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:49.258473   92925 cri.go:89] found id: ""
	I1213 19:14:49.258496   92925 logs.go:282] 0 containers: []
	W1213 19:14:49.258504   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:49.258512   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:49.258570   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:49.285496   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:49.285560   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:49.285578   92925 cri.go:89] found id: ""
	I1213 19:14:49.285601   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:49.285684   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:49.291508   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:49.296197   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:49.296358   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:49.325094   92925 cri.go:89] found id: ""
	I1213 19:14:49.325119   92925 logs.go:282] 0 containers: []
	W1213 19:14:49.325127   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:49.325134   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:49.325193   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:49.350750   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:49.350777   92925 cri.go:89] found id: ""
	I1213 19:14:49.350794   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:49.350857   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:49.354789   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:49.354915   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:49.381275   92925 cri.go:89] found id: ""
	I1213 19:14:49.381302   92925 logs.go:282] 0 containers: []
	W1213 19:14:49.381311   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:49.381320   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:49.381331   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:49.473722   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:49.473760   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:49.486016   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:49.486083   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:49.523030   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:49.523060   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:49.602664   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:49.602699   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:49.685307   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:49.685343   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:49.720678   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:49.720706   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:49.787762   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:49.779084   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:49.779733   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:49.781504   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:49.782055   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:49.783675   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:49.779084   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:49.779733   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:49.781504   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:49.782055   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:49.783675   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:49.787782   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:49.787795   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:49.826153   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:49.826188   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:49.871719   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:49.871752   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:49.902768   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:49.902858   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:52.432900   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:52.443527   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:52.443639   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:52.470204   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:52.470237   92925 cri.go:89] found id: ""
	I1213 19:14:52.470247   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:52.470302   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:52.473971   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:52.474058   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:52.501963   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:52.501983   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:52.501987   92925 cri.go:89] found id: ""
	I1213 19:14:52.501994   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:52.502048   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:52.505744   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:52.509295   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:52.509368   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:52.534850   92925 cri.go:89] found id: ""
	I1213 19:14:52.534917   92925 logs.go:282] 0 containers: []
	W1213 19:14:52.534943   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:52.534959   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:52.535033   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:52.570973   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:52.571045   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:52.571066   92925 cri.go:89] found id: ""
	I1213 19:14:52.571086   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:52.571156   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:52.574824   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:52.578317   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:52.578384   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:52.606849   92925 cri.go:89] found id: ""
	I1213 19:14:52.606873   92925 logs.go:282] 0 containers: []
	W1213 19:14:52.606882   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:52.606888   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:52.606945   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:52.633073   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:52.633095   92925 cri.go:89] found id: ""
	I1213 19:14:52.633103   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:52.633169   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:52.636819   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:52.636895   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:52.663310   92925 cri.go:89] found id: ""
	I1213 19:14:52.663333   92925 logs.go:282] 0 containers: []
	W1213 19:14:52.663342   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:52.663350   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:52.663363   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:52.732904   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:52.724948   12171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:52.725610   12171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:52.727167   12171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:52.727671   12171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:52.729366   12171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:52.724948   12171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:52.725610   12171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:52.727167   12171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:52.727671   12171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:52.729366   12171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:52.732929   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:52.732943   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:52.771098   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:52.771129   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:52.846025   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:52.846063   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:52.888075   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:52.888104   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:52.992414   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:52.992452   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:53.007058   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:53.007089   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:53.034812   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:53.034841   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:53.078790   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:53.078828   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:53.134673   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:53.134708   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:53.162943   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:53.162969   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
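	
Each cycle also tails the containers it did find. If reproducing this by hand, the same per-container fetch looks like the lines below; the IDs are the ones reported in the trace above, so substitute your own from `crictl ps -a`:

	  sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e   # kube-apiserver
	  sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894   # etcd
	  sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee   # kube-controller-manager
	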
	I1213 19:14:55.740743   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:55.751731   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:55.751816   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:55.779888   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:55.779908   92925 cri.go:89] found id: ""
	I1213 19:14:55.779916   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:55.779976   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:55.783761   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:55.783831   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:55.810156   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:55.810175   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:55.810185   92925 cri.go:89] found id: ""
	I1213 19:14:55.810192   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:55.810252   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:55.814013   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:55.817577   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:55.817649   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:55.843468   92925 cri.go:89] found id: ""
	I1213 19:14:55.843491   92925 logs.go:282] 0 containers: []
	W1213 19:14:55.843499   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:55.843505   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:55.843561   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:55.870048   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:55.870081   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:55.870093   92925 cri.go:89] found id: ""
	I1213 19:14:55.870100   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:55.870158   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:55.874026   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:55.877764   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:55.877852   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:55.907873   92925 cri.go:89] found id: ""
	I1213 19:14:55.907900   92925 logs.go:282] 0 containers: []
	W1213 19:14:55.907909   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:55.907915   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:55.907976   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:55.934710   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:55.934732   92925 cri.go:89] found id: ""
	I1213 19:14:55.934740   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:55.934795   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:55.938598   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:55.938671   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:55.968271   92925 cri.go:89] found id: ""
	I1213 19:14:55.968337   92925 logs.go:282] 0 containers: []
	W1213 19:14:55.968361   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:55.968387   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:55.968416   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:56.002213   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:56.002285   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:56.029658   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:56.029741   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:56.125956   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:56.126039   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:56.139465   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:56.139492   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:56.191699   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:56.191735   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:56.278131   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:56.278179   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:56.314251   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:56.314283   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:56.383224   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:56.373948   12354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:56.374799   12354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:56.376672   12354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:56.377083   12354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:56.378823   12354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:56.373948   12354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:56.374799   12354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:56.376672   12354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:56.377083   12354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:56.378823   12354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:56.383248   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:56.383261   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:56.410961   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:56.410990   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:56.450595   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:56.450633   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:59.032642   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:59.043619   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:59.043712   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:59.070836   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:59.070859   92925 cri.go:89] found id: ""
	I1213 19:14:59.070867   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:59.070934   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:59.074933   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:59.075009   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:59.112290   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:59.112313   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:59.112318   92925 cri.go:89] found id: ""
	I1213 19:14:59.112325   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:59.112380   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:59.117374   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:59.121073   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:59.121166   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:59.159645   92925 cri.go:89] found id: ""
	I1213 19:14:59.159714   92925 logs.go:282] 0 containers: []
	W1213 19:14:59.159741   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:59.159763   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:59.159838   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:59.193406   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:59.193430   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:59.193435   92925 cri.go:89] found id: ""
	I1213 19:14:59.193443   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:59.193524   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:59.197329   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:59.201001   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:59.201109   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:59.227682   92925 cri.go:89] found id: ""
	I1213 19:14:59.227706   92925 logs.go:282] 0 containers: []
	W1213 19:14:59.227715   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:59.227721   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:59.227784   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:59.254466   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:59.254497   92925 cri.go:89] found id: ""
	I1213 19:14:59.254505   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:59.254561   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:59.258458   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:59.258530   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:59.285792   92925 cri.go:89] found id: ""
	I1213 19:14:59.285817   92925 logs.go:282] 0 containers: []
	W1213 19:14:59.285826   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:59.285835   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:59.285851   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:59.312955   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:59.312990   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:59.394158   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:59.394195   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:59.439055   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:59.439084   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:59.452200   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:59.452253   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:59.543624   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:59.535183   12473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:59.536016   12473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:59.537681   12473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:59.538269   12473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:59.539987   12473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:59.535183   12473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:59.536016   12473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:59.537681   12473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:59.538269   12473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:59.539987   12473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:59.543645   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:59.543659   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:59.571506   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:59.571533   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:59.615595   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:59.615634   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:59.717216   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:59.717256   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:59.764205   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:59.764243   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:59.840500   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:59.840538   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
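	The cycle above repeats the same per-component probe each time: minikube asks the CRI runtime for any container (running or exited) whose name matches a control-plane component. A minimal shell sketch of that check, using only the crictl invocation that appears verbatim in the Run: lines (the loop and variable names are illustrative, not minikube's code):

	    #!/bin/bash
	    # Illustrative reconstruction of the per-component container probe seen above.
	    # For each component, list matching container IDs; an empty result corresponds
	    # to the "No container was found matching ..." warnings in the log.
	    for component in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	      ids=$(sudo crictl ps -a --quiet --name="${component}")
	      if [ -z "${ids}" ]; then
	        echo "no container found matching \"${component}\""
	      else
	        echo "${component}: ${ids}"
	      fi
	    done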
	I1213 19:15:02.367252   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:15:02.379179   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:15:02.379252   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:15:02.407368   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:02.407394   92925 cri.go:89] found id: ""
	I1213 19:15:02.407402   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:15:02.407464   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:02.411245   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:15:02.411321   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:15:02.439707   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:02.439727   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:02.439732   92925 cri.go:89] found id: ""
	I1213 19:15:02.439739   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:15:02.439793   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:02.443520   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:02.447838   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:15:02.447965   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:15:02.475049   92925 cri.go:89] found id: ""
	I1213 19:15:02.475077   92925 logs.go:282] 0 containers: []
	W1213 19:15:02.475086   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:15:02.475093   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:15:02.475153   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:15:02.509558   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:02.509582   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:02.509587   92925 cri.go:89] found id: ""
	I1213 19:15:02.509595   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:15:02.509652   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:02.513964   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:02.519816   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:15:02.519888   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:15:02.549572   92925 cri.go:89] found id: ""
	I1213 19:15:02.549639   92925 logs.go:282] 0 containers: []
	W1213 19:15:02.549653   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:15:02.549660   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:15:02.549720   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:15:02.578189   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:02.578215   92925 cri.go:89] found id: ""
	I1213 19:15:02.578224   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:15:02.578287   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:02.582094   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:15:02.582166   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:15:02.609748   92925 cri.go:89] found id: ""
	I1213 19:15:02.609774   92925 logs.go:282] 0 containers: []
	W1213 19:15:02.609783   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:15:02.609792   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:15:02.609823   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:02.660274   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:15:02.660313   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:02.737557   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:15:02.737590   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:15:02.821155   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:15:02.821193   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:15:02.853468   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:15:02.853501   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:15:02.866631   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:15:02.866661   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:02.895294   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:15:02.895323   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:02.940697   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:15:02.940734   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:02.970055   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:15:02.970088   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:03.002379   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:15:03.002409   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:15:03.096355   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:15:03.096390   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:15:03.189863   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:15:03.181408   12656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:03.182165   12656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:03.183899   12656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:03.184754   12656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:03.186389   12656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:15:03.181408   12656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:03.182165   12656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:03.183899   12656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:03.184754   12656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:03.186389   12656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
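	Every "describe nodes" attempt in this log fails the same way: kubectl on the node cannot reach the API server, because nothing is accepting connections on localhost:8443. The failing command can be replayed directly on the node; it is copied verbatim from the log, and the extra health probe at the end is only a suggested follow-up check, not something the harness runs:

	    # Replay the exact command the harness runs; it exits non-zero while the
	    # apiserver is not accepting connections on localhost:8443.
	    sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig
	    echo "exit status: $?"
	    # Optional extra check (not part of the harness): probe the apiserver port.
	    curl -ksS https://localhost:8443/healthz || echo "apiserver not reachable on :8443"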
	I1213 19:15:05.690514   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:15:05.702677   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:15:05.702772   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:15:05.730136   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:05.730160   92925 cri.go:89] found id: ""
	I1213 19:15:05.730169   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:15:05.730226   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:05.733966   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:15:05.734047   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:15:05.761337   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:05.761404   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:05.761425   92925 cri.go:89] found id: ""
	I1213 19:15:05.761450   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:15:05.761534   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:05.766511   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:05.770470   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:15:05.770545   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:15:05.803220   92925 cri.go:89] found id: ""
	I1213 19:15:05.803284   92925 logs.go:282] 0 containers: []
	W1213 19:15:05.803300   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:15:05.803306   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:15:05.803383   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:15:05.831772   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:05.831797   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:05.831803   92925 cri.go:89] found id: ""
	I1213 19:15:05.831810   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:15:05.831869   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:05.835814   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:05.839281   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:15:05.839351   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:15:05.870011   92925 cri.go:89] found id: ""
	I1213 19:15:05.870038   92925 logs.go:282] 0 containers: []
	W1213 19:15:05.870059   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:15:05.870065   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:15:05.870126   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:15:05.898850   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:05.898877   92925 cri.go:89] found id: ""
	I1213 19:15:05.898888   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:15:05.898943   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:05.903063   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:15:05.903177   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:15:05.930061   92925 cri.go:89] found id: ""
	I1213 19:15:05.930126   92925 logs.go:282] 0 containers: []
	W1213 19:15:05.930140   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:15:05.930150   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:15:05.930164   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:15:05.943518   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:15:05.943549   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:05.973699   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:15:05.973729   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:15:06.024591   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:15:06.024622   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:15:06.131997   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:15:06.132041   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:15:06.202110   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:15:06.193932   12751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:06.195174   12751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:06.196901   12751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:06.197593   12751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:06.198598   12751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:15:06.193932   12751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:06.195174   12751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:06.196901   12751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:06.197593   12751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:06.198598   12751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:15:06.202133   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:15:06.202145   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:06.241491   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:15:06.241525   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:06.289002   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:15:06.289076   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:06.376385   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:15:06.376422   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:06.406893   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:15:06.406920   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:06.438586   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:15:06.438615   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
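	With the API server unreachable, the harness falls back to collecting logs straight from the node. A condensed sketch of the collection commands it issues, taken from the Run: lines above (the container-id placeholder is illustrative; the harness substitutes the IDs it found with crictl):

	    # Unit logs for the kubelet and CRI-O, last 400 lines each.
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400
	    # Kernel warnings and errors.
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    # Per-container logs by ID, plus an overall listing with a docker fallback.
	    sudo /usr/local/bin/crictl logs --tail 400 <container-id>
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a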
	I1213 19:15:09.021141   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:15:09.032497   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:15:09.032597   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:15:09.061840   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:09.061871   92925 cri.go:89] found id: ""
	I1213 19:15:09.061881   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:15:09.061939   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:09.065632   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:15:09.065706   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:15:09.094419   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:09.094444   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:09.094449   92925 cri.go:89] found id: ""
	I1213 19:15:09.094456   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:15:09.094517   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:09.098305   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:09.108354   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:15:09.108432   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:15:09.137672   92925 cri.go:89] found id: ""
	I1213 19:15:09.137706   92925 logs.go:282] 0 containers: []
	W1213 19:15:09.137716   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:15:09.137722   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:15:09.137785   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:15:09.170831   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:09.170854   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:09.170859   92925 cri.go:89] found id: ""
	I1213 19:15:09.170866   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:15:09.170929   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:09.174672   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:09.177949   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:15:09.178023   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:15:09.208255   92925 cri.go:89] found id: ""
	I1213 19:15:09.208282   92925 logs.go:282] 0 containers: []
	W1213 19:15:09.208291   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:15:09.208297   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:15:09.208352   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:15:09.234350   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:09.234373   92925 cri.go:89] found id: ""
	I1213 19:15:09.234381   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:15:09.234453   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:09.238030   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:15:09.238102   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:15:09.264310   92925 cri.go:89] found id: ""
	I1213 19:15:09.264335   92925 logs.go:282] 0 containers: []
	W1213 19:15:09.264344   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:15:09.264352   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:15:09.264365   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:09.295245   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:15:09.295276   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:15:09.369835   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:15:09.369869   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:15:09.472350   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:15:09.472384   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:09.500555   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:15:09.500589   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:15:09.535996   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:15:09.536032   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:15:09.552067   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:15:09.552096   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:15:09.624766   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:15:09.616285   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:09.617238   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:09.618950   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:09.619348   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:09.620912   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:15:09.616285   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:09.617238   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:09.618950   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:09.619348   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:09.620912   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:15:09.624810   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:15:09.624823   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:09.654769   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:15:09.654796   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:09.695636   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:15:09.695711   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:09.740840   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:15:09.740873   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:12.330150   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:15:12.341327   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:15:12.341430   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:15:12.373666   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:12.373692   92925 cri.go:89] found id: ""
	I1213 19:15:12.373699   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:15:12.373760   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:12.377493   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:15:12.377563   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:15:12.407860   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:12.407882   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:12.407886   92925 cri.go:89] found id: ""
	I1213 19:15:12.407897   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:15:12.407965   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:12.411939   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:12.416613   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:15:12.416687   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:15:12.447044   92925 cri.go:89] found id: ""
	I1213 19:15:12.447071   92925 logs.go:282] 0 containers: []
	W1213 19:15:12.447080   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:15:12.447086   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:15:12.447149   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:15:12.474565   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:12.474599   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:12.474604   92925 cri.go:89] found id: ""
	I1213 19:15:12.474612   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:15:12.474669   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:12.478501   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:12.482327   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:15:12.482425   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:15:12.519207   92925 cri.go:89] found id: ""
	I1213 19:15:12.519235   92925 logs.go:282] 0 containers: []
	W1213 19:15:12.519245   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:15:12.519252   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:15:12.519330   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:15:12.548236   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:12.548259   92925 cri.go:89] found id: ""
	I1213 19:15:12.548269   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:15:12.548334   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:12.552167   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:15:12.552292   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:15:12.581061   92925 cri.go:89] found id: ""
	I1213 19:15:12.581086   92925 logs.go:282] 0 containers: []
	W1213 19:15:12.581094   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:15:12.581103   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:15:12.581115   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:12.626762   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:15:12.626795   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:12.676771   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:15:12.676803   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:12.708623   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:15:12.708661   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:12.735332   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:15:12.735361   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:15:12.830566   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:15:12.830606   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:12.858035   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:15:12.858107   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:12.953406   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:15:12.953445   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:15:13.037585   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:15:13.037626   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:15:13.070076   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:15:13.070108   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:15:13.083239   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:15:13.083266   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:15:13.171369   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:15:13.163050   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:13.163831   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:13.165471   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:13.166105   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:13.167624   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:15:13.163050   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:13.163831   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:13.165471   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:13.166105   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:13.167624   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:15:15.672265   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:15:15.683518   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:15:15.683589   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:15:15.713736   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:15.713764   92925 cri.go:89] found id: ""
	I1213 19:15:15.713773   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:15:15.713845   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:15.718041   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:15:15.718116   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:15:15.745439   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:15.745462   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:15.745467   92925 cri.go:89] found id: ""
	I1213 19:15:15.745475   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:15:15.745555   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:15.749679   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:15.753271   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:15:15.753343   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:15:15.780766   92925 cri.go:89] found id: ""
	I1213 19:15:15.780791   92925 logs.go:282] 0 containers: []
	W1213 19:15:15.780800   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:15:15.780806   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:15:15.780867   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:15:15.809433   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:15.809453   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:15.809458   92925 cri.go:89] found id: ""
	I1213 19:15:15.809466   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:15:15.809521   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:15.813350   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:15.816829   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:15:15.816899   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:15:15.843466   92925 cri.go:89] found id: ""
	I1213 19:15:15.843491   92925 logs.go:282] 0 containers: []
	W1213 19:15:15.843501   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:15:15.843507   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:15:15.843566   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:15:15.869979   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:15.870003   92925 cri.go:89] found id: ""
	I1213 19:15:15.870012   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:15:15.870069   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:15.873941   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:15:15.874036   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:15:15.906204   92925 cri.go:89] found id: ""
	I1213 19:15:15.906268   92925 logs.go:282] 0 containers: []
	W1213 19:15:15.906283   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:15:15.906293   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:15:15.906305   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:15:16.002221   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:15:16.002261   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:16.030993   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:15:16.031024   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:16.078933   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:15:16.078967   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:15:16.173955   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:15:16.174010   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:15:16.207960   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:15:16.207989   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:15:16.221095   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:15:16.221124   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:15:16.290865   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:15:16.280288   13166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:16.281366   13166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:16.282142   13166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:16.283740   13166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:16.284314   13166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:15:16.280288   13166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:16.281366   13166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:16.282142   13166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:16.283740   13166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:16.284314   13166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:15:16.290940   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:15:16.290969   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:16.330431   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:15:16.330462   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:16.403747   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:15:16.403785   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:16.435000   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:15:16.435076   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:18.967118   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:15:18.978473   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:15:18.978548   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:15:19.009416   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:19.009442   92925 cri.go:89] found id: ""
	I1213 19:15:19.009450   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:15:19.009506   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:19.013229   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:15:19.013304   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:15:19.046195   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:19.046217   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:19.046221   92925 cri.go:89] found id: ""
	I1213 19:15:19.046228   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:15:19.046284   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:19.050380   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:19.055287   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:15:19.055364   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:15:19.084697   92925 cri.go:89] found id: ""
	I1213 19:15:19.084724   92925 logs.go:282] 0 containers: []
	W1213 19:15:19.084734   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:15:19.084740   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:15:19.084799   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:15:19.134188   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:19.134212   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:19.134217   92925 cri.go:89] found id: ""
	I1213 19:15:19.134225   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:15:19.134281   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:19.139452   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:19.143380   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:15:19.143515   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:15:19.176707   92925 cri.go:89] found id: ""
	I1213 19:15:19.176733   92925 logs.go:282] 0 containers: []
	W1213 19:15:19.176742   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:15:19.176748   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:15:19.176808   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:15:19.205658   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:19.205681   92925 cri.go:89] found id: ""
	I1213 19:15:19.205689   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:15:19.205769   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:19.209480   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:15:19.209556   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:15:19.236187   92925 cri.go:89] found id: ""
	I1213 19:15:19.236210   92925 logs.go:282] 0 containers: []
	W1213 19:15:19.236219   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:15:19.236227   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:15:19.236239   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:15:19.335347   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:15:19.335384   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:15:19.347594   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:15:19.347622   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:15:19.423749   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:15:19.415662   13274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:19.416536   13274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:19.418222   13274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:19.418572   13274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:19.420106   13274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:15:19.415662   13274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:19.416536   13274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:19.418222   13274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:19.418572   13274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:19.420106   13274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:15:19.423773   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:15:19.423785   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:19.458293   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:15:19.458322   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:19.491891   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:15:19.491981   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:15:19.532203   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:15:19.532289   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:19.572383   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:15:19.572416   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:19.623843   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:15:19.623878   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:19.701590   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:15:19.701669   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:19.730646   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:15:19.730674   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:15:22.313136   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:15:22.324070   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:15:22.324192   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:15:22.354911   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:22.354936   92925 cri.go:89] found id: ""
	I1213 19:15:22.354944   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:15:22.355017   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:22.359138   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:15:22.359232   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:15:22.387533   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:22.387553   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:22.387559   92925 cri.go:89] found id: ""
	I1213 19:15:22.387567   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:15:22.387622   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:22.391451   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:22.395283   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:15:22.395396   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:15:22.424307   92925 cri.go:89] found id: ""
	I1213 19:15:22.424330   92925 logs.go:282] 0 containers: []
	W1213 19:15:22.424338   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:15:22.424345   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:15:22.424406   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:15:22.453085   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:22.453146   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:22.453167   92925 cri.go:89] found id: ""
	I1213 19:15:22.453192   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:15:22.453265   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:22.457420   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:22.461164   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:15:22.461238   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:15:22.491907   92925 cri.go:89] found id: ""
	I1213 19:15:22.491930   92925 logs.go:282] 0 containers: []
	W1213 19:15:22.491939   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:15:22.491944   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:15:22.492029   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:15:22.527521   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:22.527588   92925 cri.go:89] found id: ""
	I1213 19:15:22.527615   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:15:22.527710   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:22.531946   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:15:22.532027   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:15:22.559453   92925 cri.go:89] found id: ""
	I1213 19:15:22.559480   92925 logs.go:282] 0 containers: []
	W1213 19:15:22.559499   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:15:22.559510   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:15:22.559522   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:22.601772   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:15:22.601808   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:22.649158   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:15:22.649193   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:22.676639   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:15:22.676667   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:15:22.777850   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:15:22.777888   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:15:22.851444   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:15:22.842501   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:22.843358   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:22.845491   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:22.846536   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:22.847439   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:15:22.842501   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:22.843358   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:22.845491   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:22.846536   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:22.847439   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:15:22.851468   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:15:22.851480   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:22.933320   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:15:22.933358   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:22.962559   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:15:22.962589   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:15:23.059725   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:15:23.059803   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:15:23.109255   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:15:23.109286   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:15:23.122814   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:15:23.122844   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:25.651780   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:15:25.662957   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:15:25.663032   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:15:25.696971   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:25.696993   92925 cri.go:89] found id: ""
	I1213 19:15:25.697001   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:15:25.697087   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:25.701838   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:15:25.701919   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:15:25.738295   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:25.738373   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:25.738386   92925 cri.go:89] found id: ""
	I1213 19:15:25.738395   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:15:25.738459   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:25.742364   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:25.746297   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:15:25.746400   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:15:25.772105   92925 cri.go:89] found id: ""
	I1213 19:15:25.772178   92925 logs.go:282] 0 containers: []
	W1213 19:15:25.772201   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:15:25.772221   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:15:25.772305   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:15:25.799458   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:25.799526   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:25.799546   92925 cri.go:89] found id: ""
	I1213 19:15:25.799570   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:15:25.799645   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:25.803647   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:25.807583   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:15:25.807695   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:15:25.834975   92925 cri.go:89] found id: ""
	I1213 19:15:25.835051   92925 logs.go:282] 0 containers: []
	W1213 19:15:25.835066   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:15:25.835073   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:15:25.835133   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:15:25.864722   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:25.864769   92925 cri.go:89] found id: ""
	I1213 19:15:25.864778   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:15:25.864836   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:25.868764   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:15:25.868838   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:15:25.897111   92925 cri.go:89] found id: ""
	I1213 19:15:25.897133   92925 logs.go:282] 0 containers: []
	W1213 19:15:25.897141   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:15:25.897162   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:15:25.897174   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:15:26.007072   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:15:26.007104   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:15:26.025166   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:15:26.025201   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:15:26.111354   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:15:26.097401   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:26.097781   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:26.105030   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:26.105458   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:26.107065   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:15:26.097401   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:26.097781   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:26.105030   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:26.105458   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:26.107065   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:15:26.111374   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:15:26.111387   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:26.141476   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:15:26.141507   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:26.169374   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:15:26.169404   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:15:26.246093   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:15:26.246133   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:15:26.297802   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:15:26.297829   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:26.325154   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:15:26.325182   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:26.368489   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:15:26.368524   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:26.414072   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:15:26.414110   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:29.001164   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:15:29.013204   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:15:29.013272   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:15:29.047888   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:29.047909   92925 cri.go:89] found id: ""
	I1213 19:15:29.047918   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:15:29.047982   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:29.051890   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:15:29.051971   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:15:29.077464   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:29.077486   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:29.077490   92925 cri.go:89] found id: ""
	I1213 19:15:29.077498   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:15:29.077553   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:29.081462   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:29.084988   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:15:29.085157   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:15:29.115595   92925 cri.go:89] found id: ""
	I1213 19:15:29.115621   92925 logs.go:282] 0 containers: []
	W1213 19:15:29.115631   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:15:29.115637   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:15:29.115697   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:15:29.160656   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:29.160729   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:29.160748   92925 cri.go:89] found id: ""
	I1213 19:15:29.160772   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:15:29.160853   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:29.165160   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:29.168775   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:15:29.168891   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:15:29.199867   92925 cri.go:89] found id: ""
	I1213 19:15:29.199890   92925 logs.go:282] 0 containers: []
	W1213 19:15:29.199899   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:15:29.199911   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:15:29.200009   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:15:29.226478   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:29.226502   92925 cri.go:89] found id: ""
	I1213 19:15:29.226511   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:15:29.226565   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:29.230306   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:15:29.230382   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:15:29.260973   92925 cri.go:89] found id: ""
	I1213 19:15:29.260999   92925 logs.go:282] 0 containers: []
	W1213 19:15:29.261034   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:15:29.261044   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:15:29.261060   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:29.288533   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:15:29.288560   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:29.317072   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:15:29.317145   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:29.343899   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:15:29.343926   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:15:29.424466   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:15:29.424502   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:15:29.437265   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:15:29.437314   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:15:29.525751   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:15:29.505457   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:29.506350   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:29.518441   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:29.520261   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:29.521214   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:15:29.505457   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:29.506350   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:29.518441   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:29.520261   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:29.521214   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:15:29.525774   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:15:29.525787   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:29.565912   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:15:29.565947   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:29.614921   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:15:29.614962   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:29.695191   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:15:29.695229   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:15:29.726876   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:15:29.726907   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:15:32.331342   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:15:32.342123   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:15:32.342193   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:15:32.377492   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:32.377512   92925 cri.go:89] found id: ""
	I1213 19:15:32.377520   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:15:32.377603   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:32.381461   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:15:32.381535   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:15:32.408828   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:32.408849   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:32.408853   92925 cri.go:89] found id: ""
	I1213 19:15:32.408861   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:15:32.408913   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:32.412666   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:32.416683   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:15:32.416757   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:15:32.444710   92925 cri.go:89] found id: ""
	I1213 19:15:32.444734   92925 logs.go:282] 0 containers: []
	W1213 19:15:32.444744   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:15:32.444750   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:15:32.444842   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:15:32.470813   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:32.470834   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:32.470839   92925 cri.go:89] found id: ""
	I1213 19:15:32.470846   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:15:32.470904   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:32.474746   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:32.478110   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:15:32.478180   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:15:32.505590   92925 cri.go:89] found id: ""
	I1213 19:15:32.505616   92925 logs.go:282] 0 containers: []
	W1213 19:15:32.505625   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:15:32.505630   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:15:32.505685   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:15:32.534851   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:32.534873   92925 cri.go:89] found id: ""
	I1213 19:15:32.534882   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:15:32.534942   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:32.538913   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:15:32.539005   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:15:32.570980   92925 cri.go:89] found id: ""
	I1213 19:15:32.571020   92925 logs.go:282] 0 containers: []
	W1213 19:15:32.571029   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:15:32.571055   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:15:32.571075   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:15:32.672697   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:15:32.672739   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:15:32.685325   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:15:32.685360   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:15:32.762805   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:15:32.754695   13818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:32.755445   13818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:32.756898   13818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:32.757344   13818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:32.759247   13818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:15:32.754695   13818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:32.755445   13818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:32.756898   13818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:32.757344   13818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:32.759247   13818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:15:32.762877   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:15:32.762899   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:32.788216   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:15:32.788243   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:32.831764   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:15:32.831797   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:32.861451   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:15:32.861481   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:32.889040   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:15:32.889113   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:15:32.962682   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:15:32.962721   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:33.005926   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:15:33.005963   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:33.113066   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:15:33.113100   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:15:35.646466   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:15:35.657328   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:15:35.657400   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:15:35.682772   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:35.682796   92925 cri.go:89] found id: ""
	I1213 19:15:35.682805   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:15:35.682862   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:35.686943   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:15:35.687017   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:15:35.713394   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:35.713426   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:35.713433   92925 cri.go:89] found id: ""
	I1213 19:15:35.713440   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:15:35.713492   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:35.717236   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:35.720957   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:15:35.721060   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:15:35.747062   92925 cri.go:89] found id: ""
	I1213 19:15:35.747139   92925 logs.go:282] 0 containers: []
	W1213 19:15:35.747155   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:15:35.747162   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:15:35.747223   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:15:35.780788   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:35.780809   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:35.780814   92925 cri.go:89] found id: ""
	I1213 19:15:35.780822   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:15:35.780877   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:35.784913   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:35.788950   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:15:35.789084   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:15:35.817183   92925 cri.go:89] found id: ""
	I1213 19:15:35.817206   92925 logs.go:282] 0 containers: []
	W1213 19:15:35.817217   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:15:35.817223   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:15:35.817285   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:15:35.844649   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:35.844674   92925 cri.go:89] found id: ""
	I1213 19:15:35.844682   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:15:35.844741   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:35.848694   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:15:35.848772   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:15:35.880264   92925 cri.go:89] found id: ""
	I1213 19:15:35.880293   92925 logs.go:282] 0 containers: []
	W1213 19:15:35.880302   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:15:35.880311   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:15:35.880323   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:35.928133   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:15:35.928168   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:36.005056   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:15:36.005095   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:15:36.088199   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:15:36.088234   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:15:36.195615   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:15:36.195657   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:36.222570   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:15:36.222597   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:36.253158   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:15:36.253189   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:36.282294   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:15:36.282324   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:15:36.315027   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:15:36.315057   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:15:36.327415   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:15:36.327445   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:15:36.397770   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:15:36.388485   14006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:36.389249   14006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:36.391121   14006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:36.392189   14006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:36.392759   14006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:15:36.388485   14006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:36.389249   14006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:36.391121   14006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:36.392189   14006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:36.392759   14006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:15:36.397793   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:15:36.397809   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:38.950291   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:15:38.966129   92925 out.go:203] 
	W1213 19:15:38.969186   92925 out.go:285] X Exiting due to K8S_APISERVER_MISSING: adding node: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1213 19:15:38.969230   92925 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1213 19:15:38.969244   92925 out.go:285] * Related issues:
	W1213 19:15:38.969256   92925 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1213 19:15:38.969271   92925 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1213 19:15:38.972406   92925 out.go:203] 
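[editor's note] The K8S_APISERVER_MISSING exit above means minikube's wait loop never found a running kube-apiserver process on the node (the same `pgrep` check visible in the log). A minimal way to reproduce that check by hand, assuming the profile name ha-605114 seen in these logs and the CRI-O runtime used by this job, is roughly:

	# same process check the wait loop runs (see the pgrep line above)
	minikube ssh -p ha-605114 -- sudo pgrep -af kube-apiserver
	# did CRI-O ever create the apiserver container, and in what state is it?
	minikube ssh -p ha-605114 -- sudo crictl ps -a | grep kube-apiserver
	# kubelet messages around the static pod start
	minikube ssh -p ha-605114 -- sudo journalctl -u kubelet -n 100

This is a hedged debugging sketch, not part of the failing test; exact flags and paths may differ per environment.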
	
	
	==> CRI-O <==
	Dec 13 19:09:40 ha-605114 crio[670]: time="2025-12-13T19:09:40.008646414Z" level=info msg="Started container" PID=1413 containerID=162b495909eae3cb5f079d5fd260e61e560cd11212e69ad52138f4180f770a5b description=kube-system/storage-provisioner/storage-provisioner id=78f061d7-6d54-48f8-b513-d5c320e8e810 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3b4d0206cec1a1b4c0b5752a4babdaf8710471f5502067896b44e2d2df0c4d5b
	Dec 13 19:09:40 ha-605114 crio[670]: time="2025-12-13T19:09:40.011070102Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.12.1" id=d15204a7-37cc-4d8c-a231-166dcd68a520 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 19:09:40 ha-605114 crio[670]: time="2025-12-13T19:09:40.012539045Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.12.1" id=6b3690d3-7f7d-43f9-95f1-1cd8e6e953ff name=/runtime.v1.ImageService/ImageStatus
	Dec 13 19:09:40 ha-605114 crio[670]: time="2025-12-13T19:09:40.02550851Z" level=info msg="Creating container: kube-system/coredns-66bc5c9577-85rpk/coredns" id=ac3e351b-9839-445c-b06c-72f089234671 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 19:09:40 ha-605114 crio[670]: time="2025-12-13T19:09:40.025812066Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 19:09:40 ha-605114 crio[670]: time="2025-12-13T19:09:40.048513937Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 19:09:40 ha-605114 crio[670]: time="2025-12-13T19:09:40.049307526Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 19:09:40 ha-605114 crio[670]: time="2025-12-13T19:09:40.073222358Z" level=info msg="Created container 98620d4f3c674bb9bab6e41c90c32e2b069e67c18730baafb91af41ae8c19bcf: default/busybox-7b57f96db7-h5qqv/busybox" id=3c28fa9a-be33-4fec-ad16-52c4765c6b6f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 19:09:40 ha-605114 crio[670]: time="2025-12-13T19:09:40.082412808Z" level=info msg="Starting container: 98620d4f3c674bb9bab6e41c90c32e2b069e67c18730baafb91af41ae8c19bcf" id=7ee27ecf-6fea-48b9-9feb-9cb5f5270b26 name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 19:09:40 ha-605114 crio[670]: time="2025-12-13T19:09:40.109207129Z" level=info msg="Started container" PID=1422 containerID=98620d4f3c674bb9bab6e41c90c32e2b069e67c18730baafb91af41ae8c19bcf description=default/busybox-7b57f96db7-h5qqv/busybox id=7ee27ecf-6fea-48b9-9feb-9cb5f5270b26 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3641321fd538fed941abd3cee5bdec42be3fbe581a0a743eea30ee6edf2692ee
	Dec 13 19:09:40 ha-605114 crio[670]: time="2025-12-13T19:09:40.121281524Z" level=info msg="Created container 511836b213244a6dfa3897abb4838a98fc68e420901993467750d852b23b8505: kube-system/coredns-66bc5c9577-85rpk/coredns" id=ac3e351b-9839-445c-b06c-72f089234671 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 19:09:40 ha-605114 crio[670]: time="2025-12-13T19:09:40.122743263Z" level=info msg="Starting container: 511836b213244a6dfa3897abb4838a98fc68e420901993467750d852b23b8505" id=4e4e597f-bb09-435f-a3da-58627ddb7595 name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 19:09:40 ha-605114 crio[670]: time="2025-12-13T19:09:40.124507425Z" level=info msg="Started container" PID=1433 containerID=511836b213244a6dfa3897abb4838a98fc68e420901993467750d852b23b8505 description=kube-system/coredns-66bc5c9577-85rpk/coredns id=4e4e597f-bb09-435f-a3da-58627ddb7595 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1d4641fc3fdaccf9146fa15e852f55d85346be6c485420108067be6aabe0b5f4
	Dec 13 19:09:45 ha-605114 crio[670]: time="2025-12-13T19:09:45.122399466Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 19:09:45 ha-605114 crio[670]: time="2025-12-13T19:09:45.129604955Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 19:09:45 ha-605114 crio[670]: time="2025-12-13T19:09:45.129827191Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 19:09:45 ha-605114 crio[670]: time="2025-12-13T19:09:45.129946091Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 19:09:45 ha-605114 crio[670]: time="2025-12-13T19:09:45.139648811Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 19:09:45 ha-605114 crio[670]: time="2025-12-13T19:09:45.139699543Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 19:09:45 ha-605114 crio[670]: time="2025-12-13T19:09:45.139727531Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 19:09:45 ha-605114 crio[670]: time="2025-12-13T19:09:45.147861576Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 19:09:45 ha-605114 crio[670]: time="2025-12-13T19:09:45.148118551Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 19:09:45 ha-605114 crio[670]: time="2025-12-13T19:09:45.148270222Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 19:09:45 ha-605114 crio[670]: time="2025-12-13T19:09:45.153836563Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 19:09:45 ha-605114 crio[670]: time="2025-12-13T19:09:45.154024681Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	511836b213244       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   6 minutes ago       Running             coredns                   2                   1d4641fc3fdac       coredns-66bc5c9577-85rpk            kube-system
	98620d4f3c674       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   6 minutes ago       Running             busybox                   2                   3641321fd538f       busybox-7b57f96db7-h5qqv            default
	162b495909eae       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   6 minutes ago       Running             storage-provisioner       4                   3b4d0206cec1a       storage-provisioner                 kube-system
	167e9e0789f86       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   6 minutes ago       Running             kube-controller-manager   7                   c35b44e70d6d7       kube-controller-manager-ha-605114   kube-system
	7bc9cb09a081e       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   6 minutes ago       Exited              kube-controller-manager   6                   c35b44e70d6d7       kube-controller-manager-ha-605114   kube-system
	76f4d2ef7a334       369db9dfa6fa96c1f4a0f3c827dbe864b5ded1802c8b4810b5ff9fcc5f5f2c70   7 minutes ago       Running             kube-vip                  3                   6e0df90fd1fab       kube-vip-ha-605114                  kube-system
	7db7b17ab2144       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   7 minutes ago       Running             coredns                   2                   d895cdca857a1       coredns-66bc5c9577-rc9qg            kube-system
	adb6a0d2cd304       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   7 minutes ago       Running             kube-proxy                2                   511ce74a57340       kube-proxy-c6t4v                    kube-system
	f1a416886d288       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   7 minutes ago       Running             kindnet-cni               2                   e61041a4c5e3e       kindnet-dtnb7                       kube-system
	9a81ddd488bb7       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   7 minutes ago       Running             etcd                      2                   a40bba21dff67       etcd-ha-605114                      kube-system
	ee202abc8dba3       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   7 minutes ago       Running             kube-scheduler            2                   5a646569f389f       kube-scheduler-ha-605114            kube-system
	3c729bb1538bf       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   7 minutes ago       Running             kube-apiserver            2                   390331a7238b2       kube-apiserver-ha-605114            kube-system
	2b3744a5aa7a9       369db9dfa6fa96c1f4a0f3c827dbe864b5ded1802c8b4810b5ff9fcc5f5f2c70   7 minutes ago       Exited              kube-vip                  2                   6e0df90fd1fab       kube-vip-ha-605114                  kube-system
	
	
	==> coredns [511836b213244a6dfa3897abb4838a98fc68e420901993467750d852b23b8505] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60720 - 44913 "HINFO IN 3829035828325911617.4912160736216291985. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012907336s
	
	
	==> coredns [7db7b17ab2144a863bb29b6e2f750b6eb865e786cf824a74c0b415ac4077800a] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58025 - 60628 "HINFO IN 3868133962360849883.307927823530690311. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.054923758s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
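[editor's note] Both coredns instances above fail to reach https://10.96.0.1:443, the in-cluster kubernetes Service, which is consistent with the apiserver outage reported earlier rather than a DNS-specific problem. A quick sketch for checking that Service path from the node, assuming the same ha-605114 profile, might be:

	# probe the Service VIP directly from the node
	minikube ssh -p ha-605114 -- curl -sk https://10.96.0.1:443/healthz
	# confirm kube-proxy programmed rules for the Service IP
	minikube ssh -p ha-605114 -- sudo iptables-save | grep 10.96.0.1

Hedged sketch only; the Service IP is taken from the coredns errors above, and curl availability inside the node image is assumed.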
	
	
	==> describe nodes <==
	Name:               ha-605114
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-605114
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=142a8bd7cb3f031b5f72a3965bb211dc77d9e1a7
	                    minikube.k8s.io/name=ha-605114
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T18_59_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 18:59:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-605114
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 19:15:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 19:15:26 +0000   Sat, 13 Dec 2025 18:59:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 19:15:26 +0000   Sat, 13 Dec 2025 18:59:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 19:15:26 +0000   Sat, 13 Dec 2025 18:59:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 19:15:26 +0000   Sat, 13 Dec 2025 19:00:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-605114
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 78f85184c267cd52312ad0096937f858
	  System UUID:                8ff9857c-e2f0-4d86-9970-2f9e1bad48df
	  Boot ID:                    76aeba50-958b-45ee-957d-e00cd07a99b2
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-h5qqv             0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-66bc5c9577-85rpk             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     16m
	  kube-system                 coredns-66bc5c9577-rc9qg             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     16m
	  kube-system                 etcd-ha-605114                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         16m
	  kube-system                 kindnet-dtnb7                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      16m
	  kube-system                 kube-apiserver-ha-605114             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-605114    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-c6t4v                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-605114             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-605114                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m50s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m37s                  kube-proxy       
	  Normal   Starting                 16m                    kube-proxy       
	  Normal   Starting                 9m48s                  kube-proxy       
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 16m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     16m (x8 over 16m)      kubelet          Node ha-605114 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    16m (x8 over 16m)      kubelet          Node ha-605114 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  16m (x8 over 16m)      kubelet          Node ha-605114 status is now: NodeHasSufficientMemory
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 16m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  16m                    kubelet          Node ha-605114 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    16m                    kubelet          Node ha-605114 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     16m                    kubelet          Node ha-605114 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           16m                    node-controller  Node ha-605114 event: Registered Node ha-605114 in Controller
	  Normal   RegisteredNode           15m                    node-controller  Node ha-605114 event: Registered Node ha-605114 in Controller
	  Normal   NodeReady                15m                    kubelet          Node ha-605114 status is now: NodeReady
	  Normal   RegisteredNode           14m                    node-controller  Node ha-605114 event: Registered Node ha-605114 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-605114 event: Registered Node ha-605114 in Controller
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node ha-605114 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node ha-605114 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)      kubelet          Node ha-605114 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           9m45s                  node-controller  Node ha-605114 event: Registered Node ha-605114 in Controller
	  Normal   RegisteredNode           9m31s                  node-controller  Node ha-605114 event: Registered Node ha-605114 in Controller
	  Normal   RegisteredNode           9m11s                  node-controller  Node ha-605114 event: Registered Node ha-605114 in Controller
	  Normal   Starting                 7m48s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 7m48s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  7m48s (x8 over 7m48s)  kubelet          Node ha-605114 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m48s (x8 over 7m48s)  kubelet          Node ha-605114 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m48s (x8 over 7m48s)  kubelet          Node ha-605114 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m59s                  node-controller  Node ha-605114 event: Registered Node ha-605114 in Controller
	
	
	Name:               ha-605114-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-605114-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=142a8bd7cb3f031b5f72a3965bb211dc77d9e1a7
	                    minikube.k8s.io/name=ha-605114
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_13T19_00_04_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 19:00:03 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-605114-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 19:07:15 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 13 Dec 2025 19:05:54 +0000   Sat, 13 Dec 2025 19:10:33 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 13 Dec 2025 19:05:54 +0000   Sat, 13 Dec 2025 19:10:33 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 13 Dec 2025 19:05:54 +0000   Sat, 13 Dec 2025 19:10:33 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 13 Dec 2025 19:05:54 +0000   Sat, 13 Dec 2025 19:10:33 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-605114-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 78f85184c267cd52312ad0096937f858
	  System UUID:                c9a90528-cc46-44be-a006-2245d1e8d275
	  Boot ID:                    76aeba50-958b-45ee-957d-e00cd07a99b2
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-gqp98                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-605114-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         15m
	  kube-system                 kindnet-hxgh6                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      15m
	  kube-system                 kube-apiserver-ha-605114-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-605114-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-87qlc                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-605114-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-605114-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 15m                kube-proxy       
	  Normal   Starting                 9m32s              kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   RegisteredNode           15m                node-controller  Node ha-605114-m02 event: Registered Node ha-605114-m02 in Controller
	  Normal   RegisteredNode           15m                node-controller  Node ha-605114-m02 event: Registered Node ha-605114-m02 in Controller
	  Normal   RegisteredNode           14m                node-controller  Node ha-605114-m02 event: Registered Node ha-605114-m02 in Controller
	  Normal   NodeHasSufficientPID     11m (x8 over 11m)  kubelet          Node ha-605114-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-605114-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-605114-m02 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeNotReady             11m                node-controller  Node ha-605114-m02 status is now: NodeNotReady
	  Normal   RegisteredNode           11m                node-controller  Node ha-605114-m02 event: Registered Node ha-605114-m02 in Controller
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node ha-605114-m02 status is now: NodeHasSufficientMemory
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node ha-605114-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node ha-605114-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           9m45s              node-controller  Node ha-605114-m02 event: Registered Node ha-605114-m02 in Controller
	  Normal   RegisteredNode           9m31s              node-controller  Node ha-605114-m02 event: Registered Node ha-605114-m02 in Controller
	  Normal   RegisteredNode           9m11s              node-controller  Node ha-605114-m02 event: Registered Node ha-605114-m02 in Controller
	  Normal   RegisteredNode           5m59s              node-controller  Node ha-605114-m02 event: Registered Node ha-605114-m02 in Controller
	  Normal   NodeNotReady             5m8s               node-controller  Node ha-605114-m02 status is now: NodeNotReady
	
	
	Name:               ha-605114-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-605114-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=142a8bd7cb3f031b5f72a3965bb211dc77d9e1a7
	                    minikube.k8s.io/name=ha-605114
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_13T19_02_38_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 19:02:38 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-605114-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 19:07:09 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 13 Dec 2025 19:06:39 +0000   Sat, 13 Dec 2025 19:10:33 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 13 Dec 2025 19:06:39 +0000   Sat, 13 Dec 2025 19:10:33 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 13 Dec 2025 19:06:39 +0000   Sat, 13 Dec 2025 19:10:33 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 13 Dec 2025 19:06:39 +0000   Sat, 13 Dec 2025 19:10:33 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-605114-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 78f85184c267cd52312ad0096937f858
	  System UUID:                1710ae92-5ee6-4178-a2ff-b2523f5ef2e1
	  Boot ID:                    76aeba50-958b-45ee-957d-e00cd07a99b2
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-wl925    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m43s
	  kube-system                 kindnet-9xnpk               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-proxy-lqp4f            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 8m45s                  kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   NodeHasSufficientPID     13m (x3 over 13m)      kubelet          Node ha-605114-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  13m (x3 over 13m)      kubelet          Node ha-605114-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x3 over 13m)      kubelet          Node ha-605114-m04 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           13m                    node-controller  Node ha-605114-m04 event: Registered Node ha-605114-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-605114-m04 event: Registered Node ha-605114-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-605114-m04 event: Registered Node ha-605114-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-605114-m04 status is now: NodeReady
	  Normal   RegisteredNode           11m                    node-controller  Node ha-605114-m04 event: Registered Node ha-605114-m04 in Controller
	  Normal   RegisteredNode           9m45s                  node-controller  Node ha-605114-m04 event: Registered Node ha-605114-m04 in Controller
	  Normal   RegisteredNode           9m31s                  node-controller  Node ha-605114-m04 event: Registered Node ha-605114-m04 in Controller
	  Normal   Starting                 9m16s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m16s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9m13s (x8 over 9m16s)  kubelet          Node ha-605114-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m13s (x8 over 9m16s)  kubelet          Node ha-605114-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m13s (x8 over 9m16s)  kubelet          Node ha-605114-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           9m11s                  node-controller  Node ha-605114-m04 event: Registered Node ha-605114-m04 in Controller
	  Normal   RegisteredNode           5m59s                  node-controller  Node ha-605114-m04 event: Registered Node ha-605114-m04 in Controller
	  Normal   NodeNotReady             5m8s                   node-controller  Node ha-605114-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[Dec13 17:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014739] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.517365] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033368] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.774100] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.795951] kauditd_printk_skb: 39 callbacks suppressed
	[Dec13 18:17] overlayfs: idmapped layers are currently not supported
	[  +0.067652] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec13 18:23] overlayfs: idmapped layers are currently not supported
	[Dec13 18:24] overlayfs: idmapped layers are currently not supported
	[Dec13 18:42] overlayfs: idmapped layers are currently not supported
	[Dec13 18:59] overlayfs: idmapped layers are currently not supported
	[ +33.753607] overlayfs: idmapped layers are currently not supported
	[Dec13 19:01] overlayfs: idmapped layers are currently not supported
	[Dec13 19:02] overlayfs: idmapped layers are currently not supported
	[Dec13 19:03] overlayfs: idmapped layers are currently not supported
	[Dec13 19:05] overlayfs: idmapped layers are currently not supported
	[  +4.041925] overlayfs: idmapped layers are currently not supported
	[ +36.958854] overlayfs: idmapped layers are currently not supported
	[Dec13 19:06] overlayfs: idmapped layers are currently not supported
	[Dec13 19:07] overlayfs: idmapped layers are currently not supported
	[  +4.088622] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [9a81ddd488bb7e9ca9d20cc8af4e9414463f3bf2bd40edd26c2e9395f731a3ec] <==
	{"level":"info","ts":"2025-12-13T19:09:39.431175Z","caller":"traceutil/trace.go:172","msg":"trace[1650970072] range","detail":"{range_begin:/registry/csidrivers/; range_end:/registry/csidrivers0; response_count:0; response_revision:2626; }","duration":"129.103507ms","start":"2025-12-13T19:09:39.302064Z","end":"2025-12-13T19:09:39.431167Z","steps":["trace[1650970072] 'agreement among raft nodes before linearized reading'  (duration: 128.051769ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-13T19:09:39.430187Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"128.351493ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" limit:10000 ","response":"range_response_count:2 size:1908"}
	{"level":"info","ts":"2025-12-13T19:09:39.431486Z","caller":"traceutil/trace.go:172","msg":"trace[706769155] range","detail":"{range_begin:/registry/services/specs/; range_end:/registry/services/specs0; response_count:2; response_revision:2626; }","duration":"129.64282ms","start":"2025-12-13T19:09:39.301832Z","end":"2025-12-13T19:09:39.431475Z","steps":["trace[706769155] 'agreement among raft nodes before linearized reading'  (duration: 128.305668ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-13T19:09:39.430250Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"128.518032ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/roles/\" range_end:\"/registry/roles0\" limit:10000 ","response":"range_response_count:12 size:7370"}
	{"level":"info","ts":"2025-12-13T19:09:39.431783Z","caller":"traceutil/trace.go:172","msg":"trace[1208935311] range","detail":"{range_begin:/registry/roles/; range_end:/registry/roles0; response_count:12; response_revision:2626; }","duration":"130.043599ms","start":"2025-12-13T19:09:39.301728Z","end":"2025-12-13T19:09:39.431772Z","steps":["trace[1208935311] 'agreement among raft nodes before linearized reading'  (duration: 128.468162ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-13T19:09:39.430267Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"128.574975ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csidrivers\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-13T19:09:39.432082Z","caller":"traceutil/trace.go:172","msg":"trace[1994846449] range","detail":"{range_begin:/registry/csidrivers; range_end:; response_count:0; response_revision:2626; }","duration":"130.383461ms","start":"2025-12-13T19:09:39.301689Z","end":"2025-12-13T19:09:39.432073Z","steps":["trace[1994846449] 'agreement among raft nodes before linearized reading'  (duration: 128.568222ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-13T19:09:39.430286Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"128.658701ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/horizontalpodautoscalers\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-13T19:09:39.432459Z","caller":"traceutil/trace.go:172","msg":"trace[1654927610] range","detail":"{range_begin:/registry/horizontalpodautoscalers; range_end:; response_count:0; response_revision:2626; }","duration":"130.828203ms","start":"2025-12-13T19:09:39.301621Z","end":"2025-12-13T19:09:39.432449Z","steps":["trace[1654927610] 'agreement among raft nodes before linearized reading'  (duration: 128.652579ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-13T19:09:39.430302Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"128.705978ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-13T19:09:39.432811Z","caller":"traceutil/trace.go:172","msg":"trace[81323615] range","detail":"{range_begin:/registry/configmaps; range_end:; response_count:0; response_revision:2626; }","duration":"131.208952ms","start":"2025-12-13T19:09:39.301593Z","end":"2025-12-13T19:09:39.432802Z","steps":["trace[81323615] 'agreement among raft nodes before linearized reading'  (duration: 128.698922ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-13T19:09:39.430337Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"132.394351ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:kube-controller-manager\" limit:1 ","response":"range_response_count:1 size:1041"}
	{"level":"info","ts":"2025-12-13T19:09:39.433834Z","caller":"traceutil/trace.go:172","msg":"trace[344691668] range","detail":"{range_begin:/registry/clusterroles/system:kube-controller-manager; range_end:; response_count:1; response_revision:2626; }","duration":"135.882151ms","start":"2025-12-13T19:09:39.297939Z","end":"2025-12-13T19:09:39.433821Z","steps":["trace[344691668] 'agreement among raft nodes before linearized reading'  (duration: 132.36844ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-13T19:09:39.430429Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"132.860031ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/\" range_end:\"/registry/configmaps0\" limit:10000 ","response":"range_response_count:11 size:18815"}
	{"level":"info","ts":"2025-12-13T19:09:39.434335Z","caller":"traceutil/trace.go:172","msg":"trace[1944125204] range","detail":"{range_begin:/registry/configmaps/; range_end:/registry/configmaps0; response_count:11; response_revision:2626; }","duration":"136.761335ms","start":"2025-12-13T19:09:39.297564Z","end":"2025-12-13T19:09:39.434326Z","steps":["trace[1944125204] 'agreement among raft nodes before linearized reading'  (duration: 132.783495ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-13T19:09:39.430483Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"132.832462ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/certificatesigningrequests/\" range_end:\"/registry/certificatesigningrequests0\" limit:10000 ","response":"range_response_count:4 size:9425"}
	{"level":"info","ts":"2025-12-13T19:09:39.434702Z","caller":"traceutil/trace.go:172","msg":"trace[1630690192] range","detail":"{range_begin:/registry/certificatesigningrequests/; range_end:/registry/certificatesigningrequests0; response_count:4; response_revision:2626; }","duration":"137.0456ms","start":"2025-12-13T19:09:39.297647Z","end":"2025-12-13T19:09:39.434692Z","steps":["trace[1630690192] 'agreement among raft nodes before linearized reading'  (duration: 132.792011ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-13T19:09:39.430503Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"132.881808ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/volumeattributesclasses/\" range_end:\"/registry/volumeattributesclasses0\" limit:10000 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-13T19:09:39.435067Z","caller":"traceutil/trace.go:172","msg":"trace[1656563266] range","detail":"{range_begin:/registry/volumeattributesclasses/; range_end:/registry/volumeattributesclasses0; response_count:0; response_revision:2626; }","duration":"137.439856ms","start":"2025-12-13T19:09:39.297617Z","end":"2025-12-13T19:09:39.435057Z","steps":["trace[1656563266] 'agreement among raft nodes before linearized reading'  (duration: 132.874046ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-13T19:09:39.430523Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"132.92591ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/horizontalpodautoscalers/\" range_end:\"/registry/horizontalpodautoscalers0\" limit:10000 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-13T19:09:39.435401Z","caller":"traceutil/trace.go:172","msg":"trace[1716858309] range","detail":"{range_begin:/registry/horizontalpodautoscalers/; range_end:/registry/horizontalpodautoscalers0; response_count:0; response_revision:2626; }","duration":"137.801578ms","start":"2025-12-13T19:09:39.297590Z","end":"2025-12-13T19:09:39.435392Z","steps":["trace[1716858309] 'agreement among raft nodes before linearized reading'  (duration: 132.919109ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-13T19:09:39.430545Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"133.039313ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges/\" range_end:\"/registry/limitranges0\" limit:10000 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-13T19:09:39.435723Z","caller":"traceutil/trace.go:172","msg":"trace[380978863] range","detail":"{range_begin:/registry/limitranges/; range_end:/registry/limitranges0; response_count:0; response_revision:2626; }","duration":"138.19644ms","start":"2025-12-13T19:09:39.297502Z","end":"2025-12-13T19:09:39.435698Z","steps":["trace[380978863] 'agreement among raft nodes before linearized reading'  (duration: 133.03ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-13T19:09:39.430563Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"133.034177ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/validatingadmissionpolicies/\" range_end:\"/registry/validatingadmissionpolicies0\" limit:10000 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-13T19:09:39.436008Z","caller":"traceutil/trace.go:172","msg":"trace[236711872] range","detail":"{range_begin:/registry/validatingadmissionpolicies/; range_end:/registry/validatingadmissionpolicies0; response_count:0; response_revision:2626; }","duration":"138.472451ms","start":"2025-12-13T19:09:39.297525Z","end":"2025-12-13T19:09:39.435998Z","steps":["trace[236711872] 'agreement among raft nodes before linearized reading'  (duration: 133.025848ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:15:42 up  1:58,  0 user,  load average: 0.24, 1.20, 1.35
	Linux ha-605114 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f1a416886d288f33359cd21dacc737dbed6a3c975d9323a89f8c93828c040431] <==
	I1213 19:14:55.130038       1 main.go:324] Node ha-605114-m04 has CIDR [10.244.3.0/24] 
	I1213 19:15:05.129284       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 19:15:05.129384       1 main.go:301] handling current node
	I1213 19:15:05.129423       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1213 19:15:05.129456       1 main.go:324] Node ha-605114-m02 has CIDR [10.244.1.0/24] 
	I1213 19:15:05.129610       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1213 19:15:05.129647       1 main.go:324] Node ha-605114-m04 has CIDR [10.244.3.0/24] 
	I1213 19:15:15.121671       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 19:15:15.121778       1 main.go:301] handling current node
	I1213 19:15:15.121805       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1213 19:15:15.121812       1 main.go:324] Node ha-605114-m02 has CIDR [10.244.1.0/24] 
	I1213 19:15:15.121970       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1213 19:15:15.121984       1 main.go:324] Node ha-605114-m04 has CIDR [10.244.3.0/24] 
	I1213 19:15:25.129931       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 19:15:25.129981       1 main.go:301] handling current node
	I1213 19:15:25.130000       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1213 19:15:25.130008       1 main.go:324] Node ha-605114-m02 has CIDR [10.244.1.0/24] 
	I1213 19:15:25.130327       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1213 19:15:25.130433       1 main.go:324] Node ha-605114-m04 has CIDR [10.244.3.0/24] 
	I1213 19:15:35.121949       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 19:15:35.122126       1 main.go:301] handling current node
	I1213 19:15:35.122926       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1213 19:15:35.123027       1 main.go:324] Node ha-605114-m02 has CIDR [10.244.1.0/24] 
	I1213 19:15:35.123298       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1213 19:15:35.123379       1 main.go:324] Node ha-605114-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [3c729bb1538bfb45bc9b5542f5524916c96b118344d2be8a42e58a0bc6d4cb0d] <==
	{"level":"warn","ts":"2025-12-13T19:09:39.225607Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40012ff680/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-13T19:09:39.225637Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40014ec3c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-13T19:09:39.225654Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40029a8780/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-13T19:09:39.225669Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40012fc780/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-13T19:09:39.225684Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40012fd2c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-13T19:09:39.231292Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40012fc1e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-13T19:09:39.231412Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40019832c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-13T19:09:39.231467Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001982000/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-13T19:09:39.231521Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400103ad20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-13T19:09:39.231578Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40019b2000/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-13T19:09:39.231633Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001f0bc20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-13T19:09:39.231700Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40012fed20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-13T19:09:39.231767Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40012fed20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-13T19:09:39.231831Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40028461e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-13T19:09:39.231883Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40028461e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-13T19:09:39.231933Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40028461e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-13T19:09:39.231988Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001bfa5a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-13T19:09:39.232044Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001bfa5a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	W1213 19:09:41.980970       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1213 19:09:41.982698       1 controller.go:667] quota admission added evaluator for: endpoints
	I1213 19:09:41.995308       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 19:09:44.281972       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1213 19:09:52.543985       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 19:10:34.144307       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1213 19:10:34.189645       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [167e9e0789f864655d959c63fd731257c88aa1e1b22515ec35f4a07af4678202] <==
	E1213 19:10:03.979291       1 gc_controller.go:151] "Failed to get node" err="node \"ha-605114-m03\" not found" logger="pod-garbage-collector-controller" node="ha-605114-m03"
	E1213 19:10:03.979335       1 gc_controller.go:151] "Failed to get node" err="node \"ha-605114-m03\" not found" logger="pod-garbage-collector-controller" node="ha-605114-m03"
	E1213 19:10:03.979363       1 gc_controller.go:151] "Failed to get node" err="node \"ha-605114-m03\" not found" logger="pod-garbage-collector-controller" node="ha-605114-m03"
	E1213 19:10:03.979375       1 gc_controller.go:151] "Failed to get node" err="node \"ha-605114-m03\" not found" logger="pod-garbage-collector-controller" node="ha-605114-m03"
	E1213 19:10:03.979382       1 gc_controller.go:151] "Failed to get node" err="node \"ha-605114-m03\" not found" logger="pod-garbage-collector-controller" node="ha-605114-m03"
	E1213 19:10:23.979733       1 gc_controller.go:151] "Failed to get node" err="node \"ha-605114-m03\" not found" logger="pod-garbage-collector-controller" node="ha-605114-m03"
	E1213 19:10:23.979852       1 gc_controller.go:151] "Failed to get node" err="node \"ha-605114-m03\" not found" logger="pod-garbage-collector-controller" node="ha-605114-m03"
	E1213 19:10:23.979884       1 gc_controller.go:151] "Failed to get node" err="node \"ha-605114-m03\" not found" logger="pod-garbage-collector-controller" node="ha-605114-m03"
	E1213 19:10:23.979949       1 gc_controller.go:151] "Failed to get node" err="node \"ha-605114-m03\" not found" logger="pod-garbage-collector-controller" node="ha-605114-m03"
	E1213 19:10:23.979979       1 gc_controller.go:151] "Failed to get node" err="node \"ha-605114-m03\" not found" logger="pod-garbage-collector-controller" node="ha-605114-m03"
	I1213 19:10:24.001195       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-605114-m03"
	I1213 19:10:24.044627       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-605114-m03"
	I1213 19:10:24.044809       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-605114-m03"
	I1213 19:10:24.081792       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-605114-m03"
	I1213 19:10:24.081903       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-605114-m03"
	I1213 19:10:24.149160       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-605114-m03"
	I1213 19:10:24.149272       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-605114-m03"
	I1213 19:10:24.187394       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-605114-m03"
	I1213 19:10:24.187500       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-4kfpv"
	I1213 19:10:24.241495       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-4kfpv"
	I1213 19:10:24.241622       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-5m48f"
	I1213 19:10:24.284484       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-5m48f"
	I1213 19:10:24.284851       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-605114-m03"
	I1213 19:10:24.328812       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-605114-m03"
	I1213 19:15:34.087612       1 taint_eviction.go:111] "Deleting pod" logger="taint-eviction-controller" controller="taint-eviction-controller" pod="default/busybox-7b57f96db7-wl925"
	
	
	==> kube-controller-manager [7bc9cb09a081ed47d17ecf35e2d91134eaacd5250ce00bcdebed3d1097640773] <==
	I1213 19:08:49.567762       1 serving.go:386] Generated self-signed cert in-memory
	I1213 19:08:50.364508       1 controllermanager.go:191] "Starting" version="v1.34.2"
	I1213 19:08:50.364608       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 19:08:50.366449       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1213 19:08:50.366623       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1213 19:08:50.366938       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1213 19:08:50.366991       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1213 19:09:04.386470       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[-]etcd failed: reason withheld\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststar
thook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-proxy [adb6a0d2cd30435f1f392f09033a5ad40b3f1d3a5a2f1fe0d2ae76a50bf8f3b4] <==
	I1213 19:08:50.244883       1 reflector.go:568] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding"
	E1213 19:08:50.246471       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": http2: client connection lost"
	E1213 19:08:54.165411       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-605114&resourceVersion=2607\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 19:08:54.165542       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2599\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1213 19:08:54.165634       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2598\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1213 19:08:54.165741       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2598\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1213 19:08:57.237395       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-605114&resourceVersion=2607\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 19:08:57.237414       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2598\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1213 19:08:57.237660       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2599\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1213 19:08:57.237667       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2598\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1213 19:09:03.989710       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2599\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1213 19:09:03.989962       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2598\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1213 19:09:03.990083       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.49.254:8443: connect: no route to host"
	E1213 19:09:03.990245       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2598\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1213 19:09:03.990394       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-605114&resourceVersion=2607\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 19:09:15.029488       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-605114&resourceVersion=2607\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 19:09:15.029488       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2598\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1213 19:09:15.029671       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2599\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1213 19:09:15.029765       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2598\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1213 19:09:18.101424       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.49.254:8443: connect: no route to host"
	E1213 19:09:31.797443       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2599\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1213 19:09:31.797538       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.49.254:8443: connect: no route to host"
	E1213 19:09:31.797646       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-605114&resourceVersion=2607\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 19:09:34.869405       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2598\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1213 19:09:42.229400       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2598\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	
	
	==> kube-scheduler [ee202abc8dba3b97ac56d7c3063ce4fae0734134ba47b9d6070588c897f7baf0] <==
	E1213 19:08:02.527700       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1213 19:08:02.527776       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1213 19:08:02.527848       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1213 19:08:02.527900       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1213 19:08:02.527911       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1213 19:08:02.527950       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1213 19:08:02.528002       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1213 19:08:02.528106       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1213 19:08:02.528181       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1213 19:08:02.528340       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1213 19:08:02.528402       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1213 19:08:03.355200       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1213 19:08:03.375752       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1213 19:08:03.384341       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1213 19:08:03.496281       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1213 19:08:03.527514       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 19:08:03.564170       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1213 19:08:03.604860       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1213 19:08:03.609546       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1213 19:08:03.663151       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1213 19:08:03.683755       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1213 19:08:03.838837       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1213 19:08:03.901316       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1213 19:08:03.901563       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1213 19:08:06.412915       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 13 19:09:04 ha-605114 kubelet[806]: I1213 19:09:04.239034     806 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
	Dec 13 19:09:04 ha-605114 kubelet[806]: E1213 19:09:04.524602     806 status_manager.go:1018] "Failed to get status for pod" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods coredns-66bc5c9577-rc9qg)" podUID="0f2b52ea-d2f2-4307-8a52-619a737c2611" pod="kube-system/coredns-66bc5c9577-rc9qg"
	Dec 13 19:09:04 ha-605114 kubelet[806]: I1213 19:09:04.666266     806 scope.go:117] "RemoveContainer" containerID="38e10b9deae562bcc475d6b257111633953b93aa5e59b05a1a5aaca29705804b"
	Dec 13 19:09:04 ha-605114 kubelet[806]: I1213 19:09:04.666833     806 scope.go:117] "RemoveContainer" containerID="7bc9cb09a081ed47d17ecf35e2d91134eaacd5250ce00bcdebed3d1097640773"
	Dec 13 19:09:04 ha-605114 kubelet[806]: E1213 19:09:04.667006     806 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-605114_kube-system(6b36430ebbfe01869fc54848b2e1c2a9)\"" pod="kube-system/kube-controller-manager-ha-605114" podUID="6b36430ebbfe01869fc54848b2e1c2a9"
	Dec 13 19:09:05 ha-605114 kubelet[806]: E1213 19:09:05.059732     806 kubelet_node_status.go:486] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T19:08:55Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T19:08:55Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T19:08:55Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T19:08:55Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"re
cursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"ha-605114\": Patch \"https://192.168.49.2:8443/api/v1/nodes/ha-605114/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Dec 13 19:09:06 ha-605114 kubelet[806]: I1213 19:09:06.894025     806 scope.go:117] "RemoveContainer" containerID="7bc9cb09a081ed47d17ecf35e2d91134eaacd5250ce00bcdebed3d1097640773"
	Dec 13 19:09:06 ha-605114 kubelet[806]: E1213 19:09:06.894244     806 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-605114_kube-system(6b36430ebbfe01869fc54848b2e1c2a9)\"" pod="kube-system/kube-controller-manager-ha-605114" podUID="6b36430ebbfe01869fc54848b2e1c2a9"
	Dec 13 19:09:12 ha-605114 kubelet[806]: E1213 19:09:12.933737     806 projected.go:196] Error preparing data for projected volume kube-api-access-sctl2 for pod kube-system/storage-provisioner: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
	Dec 13 19:09:12 ha-605114 kubelet[806]: E1213 19:09:12.933838     806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2bdd28fc-c3f6-401d-9328-27dc669e196a-kube-api-access-sctl2 podName:2bdd28fc-c3f6-401d-9328-27dc669e196a nodeName:}" failed. No retries permitted until 2025-12-13 19:09:13.933816541 +0000 UTC m=+79.712758196 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-sctl2" (UniqueName: "kubernetes.io/projected/2bdd28fc-c3f6-401d-9328-27dc669e196a-kube-api-access-sctl2") pod "storage-provisioner" (UID: "2bdd28fc-c3f6-401d-9328-27dc669e196a") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
	Dec 13 19:09:12 ha-605114 kubelet[806]: E1213 19:09:12.934020     806 projected.go:196] Error preparing data for projected volume kube-api-access-4p9km for pod kube-system/coredns-66bc5c9577-85rpk: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
	Dec 13 19:09:12 ha-605114 kubelet[806]: E1213 19:09:12.934081     806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d7650f5f-c93c-4824-98ba-c6242f1d9595-kube-api-access-4p9km podName:d7650f5f-c93c-4824-98ba-c6242f1d9595 nodeName:}" failed. No retries permitted until 2025-12-13 19:09:13.934068028 +0000 UTC m=+79.713009674 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-4p9km" (UniqueName: "kubernetes.io/projected/d7650f5f-c93c-4824-98ba-c6242f1d9595-kube-api-access-4p9km") pod "coredns-66bc5c9577-85rpk" (UID: "d7650f5f-c93c-4824-98ba-c6242f1d9595") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
	Dec 13 19:09:12 ha-605114 kubelet[806]: E1213 19:09:12.934128     806 projected.go:196] Error preparing data for projected volume kube-api-access-rtb9w for pod default/busybox-7b57f96db7-h5qqv: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
	Dec 13 19:09:12 ha-605114 kubelet[806]: E1213 19:09:12.934157     806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b89d6cc7-836d-44be-997e-9a7fe221a5d8-kube-api-access-rtb9w podName:b89d6cc7-836d-44be-997e-9a7fe221a5d8 nodeName:}" failed. No retries permitted until 2025-12-13 19:09:13.934149422 +0000 UTC m=+79.713091069 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-rtb9w" (UniqueName: "kubernetes.io/projected/b89d6cc7-836d-44be-997e-9a7fe221a5d8-kube-api-access-rtb9w") pod "busybox-7b57f96db7-h5qqv" (UID: "b89d6cc7-836d-44be-997e-9a7fe221a5d8") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
	Dec 13 19:09:14 ha-605114 kubelet[806]: E1213 19:09:14.239262     806 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-605114?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="200ms"
	Dec 13 19:09:15 ha-605114 kubelet[806]: E1213 19:09:15.060662     806 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ha-605114\": Get \"https://192.168.49.2:8443/api/v1/nodes/ha-605114?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Dec 13 19:09:17 ha-605114 kubelet[806]: I1213 19:09:17.413956     806 scope.go:117] "RemoveContainer" containerID="7bc9cb09a081ed47d17ecf35e2d91134eaacd5250ce00bcdebed3d1097640773"
	Dec 13 19:09:17 ha-605114 kubelet[806]: E1213 19:09:17.414150     806 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-605114_kube-system(6b36430ebbfe01869fc54848b2e1c2a9)\"" pod="kube-system/kube-controller-manager-ha-605114" podUID="6b36430ebbfe01869fc54848b2e1c2a9"
	Dec 13 19:09:19 ha-605114 kubelet[806]: E1213 19:09:19.556378     806 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{ha-605114.1880dbef376d6535  default   2620 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-605114,UID:ha-605114,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ha-605114 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ha-605114,},FirstTimestamp:2025-12-13 19:07:54 +0000 UTC,LastTimestamp:2025-12-13 19:07:54.517705313 +0000 UTC m=+0.296646960,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-605114,}"
	Dec 13 19:09:24 ha-605114 kubelet[806]: E1213 19:09:24.441298     806 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-605114?timeout=10s\": context deadline exceeded" interval="400ms"
	Dec 13 19:09:25 ha-605114 kubelet[806]: E1213 19:09:25.061462     806 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ha-605114\": Get \"https://192.168.49.2:8443/api/v1/nodes/ha-605114?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Dec 13 19:09:31 ha-605114 kubelet[806]: I1213 19:09:31.414094     806 scope.go:117] "RemoveContainer" containerID="7bc9cb09a081ed47d17ecf35e2d91134eaacd5250ce00bcdebed3d1097640773"
	Dec 13 19:09:34 ha-605114 kubelet[806]: E1213 19:09:34.844103     806 controller.go:145] "Failed to ensure lease exists, will retry" err="the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io ha-605114)" interval="800ms"
	Dec 13 19:09:35 ha-605114 kubelet[806]: E1213 19:09:35.061741     806 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ha-605114\": Get \"https://192.168.49.2:8443/api/v1/nodes/ha-605114?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Dec 13 19:09:39 ha-605114 kubelet[806]: W1213 19:09:39.981430     806 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/b8b77eca4604af1b6af60bca70543cf68142cce11359af7814880328fa73eb01/crio-1d4641fc3fdaccf9146fa15e852f55d85346be6c485420108067be6aabe0b5f4 WatchSource:0}: Error finding container 1d4641fc3fdaccf9146fa15e852f55d85346be6c485420108067be6aabe0b5f4: Status 404 returned error can't find the container with id 1d4641fc3fdaccf9146fa15e852f55d85346be6c485420108067be6aabe0b5f4
	

                                                
                                                
-- /stdout --
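The logs above show the control plane repeatedly losing its own backends: the apiserver's etcd client keeps retrying on "leader changed", the controller-manager times out waiting for /healthz (etcd and rbac bootstrap-roles checks failing), and kube-proxy gets "no route to host" against the HA virtual IP 192.168.49.254:8443. A minimal way to re-check apiserver health from inside the primary node, assuming the profile is still running, curl is present in the node image, and anonymous access to /healthz is enabled (the Kubernetes default), would be something like:

  # sketch only, not part of the test harness: probe the apiserver through the HA VIP
  out/minikube-linux-arm64 -p ha-605114 ssh -- curl -sk "https://192.168.49.254:8443/healthz?verbose"
  # and directly against the local apiserver, bypassing the VIP, to separate VIP routing from apiserver health
  out/minikube-linux-arm64 -p ha-605114 ssh -- curl -sk "https://localhost:8443/healthz?verbose"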
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-605114 -n ha-605114
helpers_test.go:270: (dbg) Run:  kubectl --context ha-605114 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: busybox-7b57f96db7-6ldgc busybox-7b57f96db7-jxpf7
helpers_test.go:283: ======> post-mortem[TestMultiControlPlane/serial/RestartCluster]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context ha-605114 describe pod busybox-7b57f96db7-6ldgc busybox-7b57f96db7-jxpf7
helpers_test.go:291: (dbg) kubectl --context ha-605114 describe pod busybox-7b57f96db7-6ldgc busybox-7b57f96db7-jxpf7:

                                                
                                                
-- stdout --
	Name:             busybox-7b57f96db7-6ldgc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hsk8c (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-hsk8c:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age   From               Message
	  ----     ------            ----  ----               -------
	  Warning  FailedScheduling  11s   default-scheduler  0/3 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/3 nodes are available: 1 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	
	
	Name:             busybox-7b57f96db7-jxpf7
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-696pr (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-696pr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age   From               Message
	  ----     ------            ----  ----               -------
	  Warning  FailedScheduling  1s    default-scheduler  0/3 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/3 nodes are available: 1 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.

                                                
                                                
-- /stdout --
helpers_test.go:294: <<< TestMultiControlPlane/serial/RestartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartCluster (478.17s)
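The FailedScheduling events in the describe output above say that of the 3 remaining nodes, one is rejected by the busybox pod anti-affinity rule and two carry untolerated taints. A quick way to see which taints are blocking the pods and where the existing busybox replicas landed, assuming the ha-605114 kubeconfig context is still reachable, is:

  # sketch only: list each node's taints next to its name
  kubectl --context ha-605114 get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints
  # show the already-scheduled busybox replicas and their nodes, to see the anti-affinity conflict
  kubectl --context ha-605114 get pods -l app=busybox -o wide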

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (5.16s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:415: expected profile "ha-605114" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-605114\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-605114\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSS
haresRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.2\",\"ClusterName\":\"ha-605114\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{
\"Name\":\"m02\",\"IP\":\"192.168.49.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.49.5\",\"Port\":0,\"KubernetesVersion\":\"v1.34.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\
"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"Sta
ticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-arm64 profile list --output json"
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect ha-605114
helpers_test.go:244: (dbg) docker inspect ha-605114:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b8b77eca4604af1b6af60bca70543cf68142cce11359af7814880328fa73eb01",
	        "Created": "2025-12-13T18:58:54.586877202Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 93050,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T19:07:47.614428932Z",
	            "FinishedAt": "2025-12-13T19:07:46.864889381Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/b8b77eca4604af1b6af60bca70543cf68142cce11359af7814880328fa73eb01/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b8b77eca4604af1b6af60bca70543cf68142cce11359af7814880328fa73eb01/hostname",
	        "HostsPath": "/var/lib/docker/containers/b8b77eca4604af1b6af60bca70543cf68142cce11359af7814880328fa73eb01/hosts",
	        "LogPath": "/var/lib/docker/containers/b8b77eca4604af1b6af60bca70543cf68142cce11359af7814880328fa73eb01/b8b77eca4604af1b6af60bca70543cf68142cce11359af7814880328fa73eb01-json.log",
	        "Name": "/ha-605114",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-605114:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-605114",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b8b77eca4604af1b6af60bca70543cf68142cce11359af7814880328fa73eb01",
	                "LowerDir": "/var/lib/docker/overlay2/8397f5133759b005c7933e08a612b6b8947df33c29226cae46c5c83d03247aff-init/diff:/var/lib/docker/overlay2/4cda671c3c20fb572bbb254b6cb2d66de67b46788c2aa883ec19024f1ff16f23/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8397f5133759b005c7933e08a612b6b8947df33c29226cae46c5c83d03247aff/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8397f5133759b005c7933e08a612b6b8947df33c29226cae46c5c83d03247aff/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8397f5133759b005c7933e08a612b6b8947df33c29226cae46c5c83d03247aff/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-605114",
	                "Source": "/var/lib/docker/volumes/ha-605114/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-605114",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-605114",
	                "name.minikube.sigs.k8s.io": "ha-605114",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7c9ba4aac7e27f5373688f6fc1a7a905972eca17b43555a3811eba451288f742",
	            "SandboxKey": "/var/run/docker/netns/7c9ba4aac7e2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32833"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32834"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32837"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32835"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32836"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-605114": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9a:0b:16:d7:dc:44",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a2f3617b1da5e979c171e0e32faeb143b6ffd1484ed485ce26cb0c66c2f2f8d4",
	                    "EndpointID": "ad19576bfc7fdb2d25ff186edf415bfaa77021d19f2378c0078a6b8dd2c2a121",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-605114",
	                        "b8b77eca4604"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
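The inspect output above shows how the ha-605114 container publishes its ports: each guest port (22, 2376, 5000, 8443, 32443) is bound to 127.0.0.1 on an ephemeral host port (32833-32837 in this run), and the restart log below resolves its SSH endpoint from exactly this data. A minimal sketch of that lookup, assuming a local Docker daemon and the ha-605114 container from this run (the command mirrors the template minikube itself invokes below):

	# Read the host port Docker mapped to the guest's SSH port (22/tcp).
	# Container name and the expected value (32833) are taken from this report's output.
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-605114
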
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-605114 -n ha-605114
helpers_test.go:253: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p ha-605114 logs -n 25: (2.184411684s)
helpers_test.go:261: TestMultiControlPlane/serial/DegradedAfterClusterRestart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-605114 cp ha-605114-m03:/home/docker/cp-test.txt ha-605114-m04:/home/docker/cp-test_ha-605114-m03_ha-605114-m04.txt               │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:03 UTC │
	│ ssh     │ ha-605114 ssh -n ha-605114-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:03 UTC │
	│ ssh     │ ha-605114 ssh -n ha-605114-m04 sudo cat /home/docker/cp-test_ha-605114-m03_ha-605114-m04.txt                                         │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:03 UTC │
	│ cp      │ ha-605114 cp testdata/cp-test.txt ha-605114-m04:/home/docker/cp-test.txt                                                             │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:03 UTC │
	│ ssh     │ ha-605114 ssh -n ha-605114-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:03 UTC │
	│ cp      │ ha-605114 cp ha-605114-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1407969839/001/cp-test_ha-605114-m04.txt │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:03 UTC │
	│ ssh     │ ha-605114 ssh -n ha-605114-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:03 UTC │
	│ cp      │ ha-605114 cp ha-605114-m04:/home/docker/cp-test.txt ha-605114:/home/docker/cp-test_ha-605114-m04_ha-605114.txt                       │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:03 UTC │
	│ ssh     │ ha-605114 ssh -n ha-605114-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:03 UTC │
	│ ssh     │ ha-605114 ssh -n ha-605114 sudo cat /home/docker/cp-test_ha-605114-m04_ha-605114.txt                                                 │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:03 UTC │
	│ cp      │ ha-605114 cp ha-605114-m04:/home/docker/cp-test.txt ha-605114-m02:/home/docker/cp-test_ha-605114-m04_ha-605114-m02.txt               │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:03 UTC │
	│ ssh     │ ha-605114 ssh -n ha-605114-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:03 UTC │
	│ ssh     │ ha-605114 ssh -n ha-605114-m02 sudo cat /home/docker/cp-test_ha-605114-m04_ha-605114-m02.txt                                         │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:03 UTC │
	│ cp      │ ha-605114 cp ha-605114-m04:/home/docker/cp-test.txt ha-605114-m03:/home/docker/cp-test_ha-605114-m04_ha-605114-m03.txt               │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:03 UTC │
	│ ssh     │ ha-605114 ssh -n ha-605114-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:03 UTC │
	│ ssh     │ ha-605114 ssh -n ha-605114-m03 sudo cat /home/docker/cp-test_ha-605114-m04_ha-605114-m03.txt                                         │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:03 UTC │
	│ node    │ ha-605114 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:03 UTC │
	│ node    │ ha-605114 node start m02 --alsologtostderr -v 5                                                                                      │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:04 UTC │
	│ node    │ ha-605114 node list --alsologtostderr -v 5                                                                                           │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:04 UTC │                     │
	│ stop    │ ha-605114 stop --alsologtostderr -v 5                                                                                                │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:04 UTC │ 13 Dec 25 19:05 UTC │
	│ start   │ ha-605114 start --wait true --alsologtostderr -v 5                                                                                   │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:05 UTC │ 13 Dec 25 19:06 UTC │
	│ node    │ ha-605114 node list --alsologtostderr -v 5                                                                                           │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:06 UTC │                     │
	│ node    │ ha-605114 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:06 UTC │ 13 Dec 25 19:07 UTC │
	│ stop    │ ha-605114 stop --alsologtostderr -v 5                                                                                                │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:07 UTC │ 13 Dec 25 19:07 UTC │
	│ start   │ ha-605114 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                                         │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:07 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 19:07:47
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 19:07:47.349427   92925 out.go:360] Setting OutFile to fd 1 ...
	I1213 19:07:47.349751   92925 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 19:07:47.349782   92925 out.go:374] Setting ErrFile to fd 2...
	I1213 19:07:47.349805   92925 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 19:07:47.350088   92925 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-2686/.minikube/bin
	I1213 19:07:47.350503   92925 out.go:368] Setting JSON to false
	I1213 19:07:47.351372   92925 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":6620,"bootTime":1765646248,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 19:07:47.351472   92925 start.go:143] virtualization:  
	I1213 19:07:47.357175   92925 out.go:179] * [ha-605114] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 19:07:47.360285   92925 notify.go:221] Checking for updates...
	I1213 19:07:47.363188   92925 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 19:07:47.366066   92925 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 19:07:47.368997   92925 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-2686/kubeconfig
	I1213 19:07:47.371939   92925 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-2686/.minikube
	I1213 19:07:47.374564   92925 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 19:07:47.377424   92925 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 19:07:47.380815   92925 config.go:182] Loaded profile config "ha-605114": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 19:07:47.381472   92925 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 19:07:47.411852   92925 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 19:07:47.411970   92925 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 19:07:47.470115   92925 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-13 19:07:47.460445366 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 19:07:47.470224   92925 docker.go:319] overlay module found
	I1213 19:07:47.473192   92925 out.go:179] * Using the docker driver based on existing profile
	I1213 19:07:47.475964   92925 start.go:309] selected driver: docker
	I1213 19:07:47.475980   92925 start.go:927] validating driver "docker" against &{Name:ha-605114 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-605114 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow
:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 19:07:47.476125   92925 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 19:07:47.476235   92925 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 19:07:47.532110   92925 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-13 19:07:47.522555398 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 19:07:47.532550   92925 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 19:07:47.532582   92925 cni.go:84] Creating CNI manager for ""
	I1213 19:07:47.532636   92925 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1213 19:07:47.532689   92925 start.go:353] cluster config:
	{Name:ha-605114 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-605114 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-s
erver:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 19:07:47.537457   92925 out.go:179] * Starting "ha-605114" primary control-plane node in "ha-605114" cluster
	I1213 19:07:47.540151   92925 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 19:07:47.542975   92925 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 19:07:47.545679   92925 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 19:07:47.545731   92925 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-2686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1213 19:07:47.545743   92925 cache.go:65] Caching tarball of preloaded images
	I1213 19:07:47.545753   92925 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 19:07:47.545828   92925 preload.go:238] Found /home/jenkins/minikube-integration/22122-2686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 19:07:47.545838   92925 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1213 19:07:47.545971   92925 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/config.json ...
	I1213 19:07:47.565319   92925 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 19:07:47.565343   92925 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 19:07:47.565364   92925 cache.go:243] Successfully downloaded all kic artifacts
	I1213 19:07:47.565392   92925 start.go:360] acquireMachinesLock for ha-605114: {Name:mk8d2cbed975abcdd5664438df80622381a361a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 19:07:47.565456   92925 start.go:364] duration metric: took 41.903µs to acquireMachinesLock for "ha-605114"
	I1213 19:07:47.565477   92925 start.go:96] Skipping create...Using existing machine configuration
	I1213 19:07:47.565483   92925 fix.go:54] fixHost starting: 
	I1213 19:07:47.565741   92925 cli_runner.go:164] Run: docker container inspect ha-605114 --format={{.State.Status}}
	I1213 19:07:47.581688   92925 fix.go:112] recreateIfNeeded on ha-605114: state=Stopped err=<nil>
	W1213 19:07:47.581717   92925 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 19:07:47.584947   92925 out.go:252] * Restarting existing docker container for "ha-605114" ...
	I1213 19:07:47.585046   92925 cli_runner.go:164] Run: docker start ha-605114
	I1213 19:07:47.865372   92925 cli_runner.go:164] Run: docker container inspect ha-605114 --format={{.State.Status}}
	I1213 19:07:47.883933   92925 kic.go:430] container "ha-605114" state is running.
	I1213 19:07:47.884352   92925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-605114
	I1213 19:07:47.906511   92925 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/config.json ...
	I1213 19:07:47.906746   92925 machine.go:94] provisionDockerMachine start ...
	I1213 19:07:47.906805   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114
	I1213 19:07:47.930498   92925 main.go:143] libmachine: Using SSH client type: native
	I1213 19:07:47.930829   92925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1213 19:07:47.930842   92925 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 19:07:47.931376   92925 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46728->127.0.0.1:32833: read: connection reset by peer
	I1213 19:07:51.084950   92925 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-605114
	
	I1213 19:07:51.084978   92925 ubuntu.go:182] provisioning hostname "ha-605114"
	I1213 19:07:51.085064   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114
	I1213 19:07:51.103183   92925 main.go:143] libmachine: Using SSH client type: native
	I1213 19:07:51.103509   92925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1213 19:07:51.103523   92925 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-605114 && echo "ha-605114" | sudo tee /etc/hostname
	I1213 19:07:51.262962   92925 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-605114
	
	I1213 19:07:51.263080   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114
	I1213 19:07:51.281758   92925 main.go:143] libmachine: Using SSH client type: native
	I1213 19:07:51.282067   92925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1213 19:07:51.282093   92925 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-605114' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-605114/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-605114' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 19:07:51.433225   92925 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 19:07:51.433251   92925 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-2686/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-2686/.minikube}
	I1213 19:07:51.433276   92925 ubuntu.go:190] setting up certificates
	I1213 19:07:51.433294   92925 provision.go:84] configureAuth start
	I1213 19:07:51.433356   92925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-605114
	I1213 19:07:51.451056   92925 provision.go:143] copyHostCerts
	I1213 19:07:51.451109   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem
	I1213 19:07:51.451157   92925 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem, removing ...
	I1213 19:07:51.451169   92925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem
	I1213 19:07:51.451244   92925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem (1082 bytes)
	I1213 19:07:51.451330   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem
	I1213 19:07:51.451351   92925 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem, removing ...
	I1213 19:07:51.451359   92925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem
	I1213 19:07:51.451387   92925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem (1123 bytes)
	I1213 19:07:51.451438   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem
	I1213 19:07:51.451459   92925 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem, removing ...
	I1213 19:07:51.451473   92925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem
	I1213 19:07:51.451505   92925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem (1675 bytes)
	I1213 19:07:51.451557   92925 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem org=jenkins.ha-605114 san=[127.0.0.1 192.168.49.2 ha-605114 localhost minikube]
	I1213 19:07:51.562646   92925 provision.go:177] copyRemoteCerts
	I1213 19:07:51.562709   92925 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 19:07:51.562753   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114
	I1213 19:07:51.579816   92925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/ha-605114/id_rsa Username:docker}
	I1213 19:07:51.684734   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1213 19:07:51.684815   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 19:07:51.703545   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1213 19:07:51.703625   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1213 19:07:51.721319   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1213 19:07:51.721382   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 19:07:51.738806   92925 provision.go:87] duration metric: took 305.496623ms to configureAuth
	I1213 19:07:51.738832   92925 ubuntu.go:206] setting minikube options for container-runtime
	I1213 19:07:51.739059   92925 config.go:182] Loaded profile config "ha-605114": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 19:07:51.739152   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114
	I1213 19:07:51.756183   92925 main.go:143] libmachine: Using SSH client type: native
	I1213 19:07:51.756478   92925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1213 19:07:51.756493   92925 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 19:07:52.176419   92925 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 19:07:52.176439   92925 machine.go:97] duration metric: took 4.269683244s to provisionDockerMachine
	I1213 19:07:52.176449   92925 start.go:293] postStartSetup for "ha-605114" (driver="docker")
	I1213 19:07:52.176460   92925 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 19:07:52.176518   92925 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 19:07:52.176563   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114
	I1213 19:07:52.201857   92925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/ha-605114/id_rsa Username:docker}
	I1213 19:07:52.305092   92925 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 19:07:52.308224   92925 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 19:07:52.308251   92925 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 19:07:52.308263   92925 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-2686/.minikube/addons for local assets ...
	I1213 19:07:52.308316   92925 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-2686/.minikube/files for local assets ...
	I1213 19:07:52.308413   92925 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem -> 46372.pem in /etc/ssl/certs
	I1213 19:07:52.308423   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem -> /etc/ssl/certs/46372.pem
	I1213 19:07:52.308523   92925 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 19:07:52.315982   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem --> /etc/ssl/certs/46372.pem (1708 bytes)
	I1213 19:07:52.333023   92925 start.go:296] duration metric: took 156.543018ms for postStartSetup
	I1213 19:07:52.333100   92925 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 19:07:52.333150   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114
	I1213 19:07:52.353818   92925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/ha-605114/id_rsa Username:docker}
	I1213 19:07:52.454237   92925 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 19:07:52.459167   92925 fix.go:56] duration metric: took 4.893676995s for fixHost
	I1213 19:07:52.459203   92925 start.go:83] releasing machines lock for "ha-605114", held for 4.893726932s
	I1213 19:07:52.459271   92925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-605114
	I1213 19:07:52.475811   92925 ssh_runner.go:195] Run: cat /version.json
	I1213 19:07:52.475832   92925 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 19:07:52.475868   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114
	I1213 19:07:52.475886   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114
	I1213 19:07:52.494277   92925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/ha-605114/id_rsa Username:docker}
	I1213 19:07:52.499565   92925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/ha-605114/id_rsa Username:docker}
	I1213 19:07:52.694122   92925 ssh_runner.go:195] Run: systemctl --version
	I1213 19:07:52.700676   92925 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 19:07:52.737939   92925 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 19:07:52.742564   92925 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 19:07:52.742632   92925 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 19:07:52.750413   92925 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 19:07:52.750438   92925 start.go:496] detecting cgroup driver to use...
	I1213 19:07:52.750469   92925 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 19:07:52.750516   92925 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 19:07:52.765290   92925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 19:07:52.779600   92925 docker.go:218] disabling cri-docker service (if available) ...
	I1213 19:07:52.779718   92925 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 19:07:52.795802   92925 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 19:07:52.809441   92925 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 19:07:52.921383   92925 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 19:07:53.050247   92925 docker.go:234] disabling docker service ...
	I1213 19:07:53.050357   92925 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 19:07:53.065412   92925 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 19:07:53.078985   92925 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 19:07:53.197041   92925 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 19:07:53.312016   92925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 19:07:53.324873   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 19:07:53.338465   92925 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 19:07:53.338566   92925 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:07:53.348165   92925 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 19:07:53.348244   92925 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:07:53.357334   92925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:07:53.366113   92925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:07:53.375030   92925 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 19:07:53.383092   92925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:07:53.392159   92925 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:07:53.400500   92925 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:07:53.409475   92925 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 19:07:53.416937   92925 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 19:07:53.424427   92925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 19:07:53.551020   92925 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 19:07:53.724377   92925 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 19:07:53.724453   92925 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 19:07:53.728412   92925 start.go:564] Will wait 60s for crictl version
	I1213 19:07:53.728528   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:07:53.732393   92925 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 19:07:53.759934   92925 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 19:07:53.760022   92925 ssh_runner.go:195] Run: crio --version
	I1213 19:07:53.792422   92925 ssh_runner.go:195] Run: crio --version
	I1213 19:07:53.826233   92925 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1213 19:07:53.829188   92925 cli_runner.go:164] Run: docker network inspect ha-605114 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 19:07:53.845641   92925 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 19:07:53.849708   92925 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 19:07:53.860398   92925 kubeadm.go:884] updating cluster {Name:ha-605114 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-605114 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubeta
il:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 19:07:53.860545   92925 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 19:07:53.860602   92925 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 19:07:53.896899   92925 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 19:07:53.896925   92925 crio.go:433] Images already preloaded, skipping extraction
	I1213 19:07:53.896980   92925 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 19:07:53.927660   92925 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 19:07:53.927686   92925 cache_images.go:86] Images are preloaded, skipping loading
	I1213 19:07:53.927694   92925 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1213 19:07:53.927835   92925 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-605114 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-605114 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 19:07:53.927943   92925 ssh_runner.go:195] Run: crio config
	I1213 19:07:53.983293   92925 cni.go:84] Creating CNI manager for ""
	I1213 19:07:53.983320   92925 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1213 19:07:53.983344   92925 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 19:07:53.983367   92925 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-605114 NodeName:ha-605114 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 19:07:53.983512   92925 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-605114"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 19:07:53.983533   92925 kube-vip.go:115] generating kube-vip config ...
	I1213 19:07:53.983586   92925 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1213 19:07:53.998146   92925 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
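	With no ip_vs modules loaded, minikube skips IPVS-based control-plane load balancing, and the manifest that follows relies on plain ARP failover for the VIP (vip_arp is set, no lb_enable). Purely as an illustrative sketch, the same probe can be repeated on the docker host, whose kernel is the only place the module could come from (the node container cannot load modules itself):

	    # run on the docker host: try to load IPVS support, then re-run the probe minikube used
	    sudo modprobe ip_vs && lsmod | grep ip_vs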
	I1213 19:07:53.998359   92925 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
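	The manifest above runs kube-vip as a host-network static pod on each control-plane node, advertising 192.168.49.254 over ARP on eth0 and holding the plndr-cp-lock lease so only one node answers for the VIP at a time. A hedged way to probe the VIP once the apiserver is serving again, assuming you are on the Linux host running the docker driver (where the 192.168.49.0/24 bridge is reachable):

	    # any "ok"-style response means a control-plane node is answering for the VIP
	    curl -k --max-time 5 https://192.168.49.254:8443/healthz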
	I1213 19:07:53.998456   92925 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 19:07:54.007466   92925 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 19:07:54.007601   92925 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1213 19:07:54.016257   92925 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1213 19:07:54.030166   92925 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 19:07:54.043943   92925 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1213 19:07:54.057568   92925 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
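	Both generated payloads are staged onto the node by the scp steps above: the 2206-byte kubeadm config as /var/tmp/minikube/kubeadm.yaml.new and the kube-vip manifest under /etc/kubernetes/manifests/. A minimal sketch for eyeballing them in place, assuming the docker driver and the ha-605114 container name from this run:

	    docker exec ha-605114 sudo cat /var/tmp/minikube/kubeadm.yaml.new /etc/kubernetes/manifests/kube-vip.yaml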
	I1213 19:07:54.070913   92925 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1213 19:07:54.074912   92925 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 19:07:54.085321   92925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 19:07:54.204815   92925 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 19:07:54.219656   92925 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114 for IP: 192.168.49.2
	I1213 19:07:54.219678   92925 certs.go:195] generating shared ca certs ...
	I1213 19:07:54.219703   92925 certs.go:227] acquiring lock for ca certs: {Name:mkf9b87b9a1a82bdfcae8a54365751154f6d858f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:07:54.219837   92925 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key
	I1213 19:07:54.219890   92925 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key
	I1213 19:07:54.219904   92925 certs.go:257] generating profile certs ...
	I1213 19:07:54.219983   92925 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/client.key
	I1213 19:07:54.220016   92925 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.key.6ef1fccc
	I1213 19:07:54.220035   92925 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.crt.6ef1fccc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I1213 19:07:54.524208   92925 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.crt.6ef1fccc ...
	I1213 19:07:54.524279   92925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.crt.6ef1fccc: {Name:mk2a78acb3455aba2154553b94cc00acb06ef2bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:07:54.524506   92925 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.key.6ef1fccc ...
	I1213 19:07:54.524551   92925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.key.6ef1fccc: {Name:mk04e3ed8a0db9ab16dbffd5c3b9073d491094e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:07:54.524690   92925 certs.go:382] copying /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.crt.6ef1fccc -> /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.crt
	I1213 19:07:54.524872   92925 certs.go:386] copying /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.key.6ef1fccc -> /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.key
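	The regenerated apiserver serving cert is signed for every address a client might dial: the first service ClusterIP (10.96.0.1), localhost, both control-plane node IPs, and the 192.168.49.254 HA VIP. A hedged one-liner to confirm the SANs on the node (the -ext flag needs OpenSSL 1.1.1 or newer):

	    docker exec ha-605114 sudo openssl x509 -noout -ext subjectAltName -in /var/lib/minikube/certs/apiserver.crt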
	I1213 19:07:54.525075   92925 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/proxy-client.key
	I1213 19:07:54.525118   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1213 19:07:54.525152   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1213 19:07:54.525194   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1213 19:07:54.525228   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1213 19:07:54.525260   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1213 19:07:54.525307   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1213 19:07:54.525343   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1213 19:07:54.525371   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1213 19:07:54.525461   92925 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637.pem (1338 bytes)
	W1213 19:07:54.525519   92925 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637_empty.pem, impossibly tiny 0 bytes
	I1213 19:07:54.525567   92925 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 19:07:54.525619   92925 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem (1082 bytes)
	I1213 19:07:54.525684   92925 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem (1123 bytes)
	I1213 19:07:54.525769   92925 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem (1675 bytes)
	I1213 19:07:54.525903   92925 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem (1708 bytes)
	I1213 19:07:54.525966   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:07:54.526009   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637.pem -> /usr/share/ca-certificates/4637.pem
	I1213 19:07:54.526041   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem -> /usr/share/ca-certificates/46372.pem
	I1213 19:07:54.526676   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 19:07:54.547219   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 19:07:54.566530   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 19:07:54.584290   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 19:07:54.601920   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1213 19:07:54.619619   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 19:07:54.637359   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 19:07:54.654838   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 19:07:54.674423   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 19:07:54.692475   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637.pem --> /usr/share/ca-certificates/4637.pem (1338 bytes)
	I1213 19:07:54.711269   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem --> /usr/share/ca-certificates/46372.pem (1708 bytes)
	I1213 19:07:54.730584   92925 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 19:07:54.744548   92925 ssh_runner.go:195] Run: openssl version
	I1213 19:07:54.750950   92925 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/46372.pem
	I1213 19:07:54.759097   92925 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/46372.pem /etc/ssl/certs/46372.pem
	I1213 19:07:54.766678   92925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/46372.pem
	I1213 19:07:54.770469   92925 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 18:27 /usr/share/ca-certificates/46372.pem
	I1213 19:07:54.770573   92925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/46372.pem
	I1213 19:07:54.811925   92925 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 19:07:54.820248   92925 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:07:54.829596   92925 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 19:07:54.843944   92925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:07:54.848466   92925 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:07:54.848527   92925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:07:54.910394   92925 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 19:07:54.922018   92925 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4637.pem
	I1213 19:07:54.934942   92925 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4637.pem /etc/ssl/certs/4637.pem
	I1213 19:07:54.943147   92925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4637.pem
	I1213 19:07:54.953686   92925 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 18:27 /usr/share/ca-certificates/4637.pem
	I1213 19:07:54.953799   92925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4637.pem
	I1213 19:07:55.020871   92925 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 19:07:55.034570   92925 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 19:07:55.045312   92925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 19:07:55.146347   92925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 19:07:55.197938   92925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 19:07:55.240888   92925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 19:07:55.293579   92925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 19:07:55.349397   92925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
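	Each -checkend 86400 probe above asks openssl whether the certificate will still be valid 24 hours from now: exit status 0 means yes, 1 means it expires (or has already expired) within that window, which minikube treats as a reason to regenerate the cert instead of reusing it. The same check by hand, assuming the container name from this run:

	    docker exec ha-605114 sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	      && echo "valid for at least 24h" || echo "renewal needed"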
	I1213 19:07:55.405749   92925 kubeadm.go:401] StartCluster: {Name:ha-605114 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-605114 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 19:07:55.405941   92925 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 19:07:55.406039   92925 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 19:07:55.476432   92925 cri.go:89] found id: "23b44f60db0dc9ad888430163cce4adc2cef45e4fff10aded1fd37e36e5d5955"
	I1213 19:07:55.476492   92925 cri.go:89] found id: "9a81ddd488bb7e9ca9d20cc8af4e9414463f3bf2bd40edd26c2e9395f731a3ec"
	I1213 19:07:55.476519   92925 cri.go:89] found id: "ee202abc8dba3b97ac56d7c3063ce4fae0734134ba47b9d6070588c897f7baf0"
	I1213 19:07:55.476536   92925 cri.go:89] found id: "3c729bb1538bfb45bc9b5542f5524916c96b118344d2be8a42e58a0bc6d4cb0d"
	I1213 19:07:55.476570   92925 cri.go:89] found id: "2b3744a5aa7a90a9d9036f0de528d8ed7e951f80254fa43fd57f666e0a6ccc86"
	I1213 19:07:55.476591   92925 cri.go:89] found id: ""
	I1213 19:07:55.476674   92925 ssh_runner.go:195] Run: sudo runc list -f json
	W1213 19:07:55.502827   92925 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T19:07:55Z" level=error msg="open /run/runc: no such file or directory"
	I1213 19:07:55.502965   92925 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 19:07:55.514772   92925 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 19:07:55.514841   92925 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 19:07:55.514932   92925 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 19:07:55.530907   92925 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 19:07:55.531414   92925 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-605114" does not appear in /home/jenkins/minikube-integration/22122-2686/kubeconfig
	I1213 19:07:55.531569   92925 kubeconfig.go:62] /home/jenkins/minikube-integration/22122-2686/kubeconfig needs updating (will repair): [kubeconfig missing "ha-605114" cluster setting kubeconfig missing "ha-605114" context setting]
	I1213 19:07:55.531908   92925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/kubeconfig: {Name:mkd364151ba0e08b56cf2ae826abbcc274faaabf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:07:55.532529   92925 kapi.go:59] client config for ha-605114: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/client.crt", KeyFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/client.key", CAFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 19:07:55.533545   92925 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1213 19:07:55.533623   92925 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1213 19:07:55.533709   92925 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1213 19:07:55.533743   92925 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1213 19:07:55.533762   92925 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1213 19:07:55.533784   92925 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1213 19:07:55.534156   92925 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 19:07:55.550155   92925 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1213 19:07:55.550227   92925 kubeadm.go:602] duration metric: took 35.349185ms to restartPrimaryControlPlane
	I1213 19:07:55.550251   92925 kubeadm.go:403] duration metric: took 144.511847ms to StartCluster
	I1213 19:07:55.550281   92925 settings.go:142] acquiring lock: {Name:mkabef07beee93a0619ef6b8f854900ab9ed0899 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:07:55.550405   92925 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22122-2686/kubeconfig
	I1213 19:07:55.551146   92925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/kubeconfig: {Name:mkd364151ba0e08b56cf2ae826abbcc274faaabf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:07:55.551412   92925 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 19:07:55.551467   92925 start.go:242] waiting for startup goroutines ...
	I1213 19:07:55.551494   92925 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 19:07:55.552092   92925 config.go:182] Loaded profile config "ha-605114": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 19:07:55.557393   92925 out.go:179] * Enabled addons: 
	I1213 19:07:55.560282   92925 addons.go:530] duration metric: took 8.786078ms for enable addons: enabled=[]
	I1213 19:07:55.560370   92925 start.go:247] waiting for cluster config update ...
	I1213 19:07:55.560416   92925 start.go:256] writing updated cluster config ...
	I1213 19:07:55.563604   92925 out.go:203] 
	I1213 19:07:55.566673   92925 config.go:182] Loaded profile config "ha-605114": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 19:07:55.566871   92925 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/config.json ...
	I1213 19:07:55.570151   92925 out.go:179] * Starting "ha-605114-m02" control-plane node in "ha-605114" cluster
	I1213 19:07:55.572987   92925 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 19:07:55.575841   92925 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 19:07:55.578800   92925 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 19:07:55.578823   92925 cache.go:65] Caching tarball of preloaded images
	I1213 19:07:55.578933   92925 preload.go:238] Found /home/jenkins/minikube-integration/22122-2686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 19:07:55.578943   92925 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1213 19:07:55.579063   92925 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/config.json ...
	I1213 19:07:55.579269   92925 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 19:07:55.599207   92925 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 19:07:55.599233   92925 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 19:07:55.599247   92925 cache.go:243] Successfully downloaded all kic artifacts
	I1213 19:07:55.599269   92925 start.go:360] acquireMachinesLock for ha-605114-m02: {Name:mk43db0c2b2ac44e0e8dc9a68aa6922f0bb2fccb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 19:07:55.599325   92925 start.go:364] duration metric: took 36.989µs to acquireMachinesLock for "ha-605114-m02"
	I1213 19:07:55.599348   92925 start.go:96] Skipping create...Using existing machine configuration
	I1213 19:07:55.599358   92925 fix.go:54] fixHost starting: m02
	I1213 19:07:55.599613   92925 cli_runner.go:164] Run: docker container inspect ha-605114-m02 --format={{.State.Status}}
	I1213 19:07:55.630999   92925 fix.go:112] recreateIfNeeded on ha-605114-m02: state=Stopped err=<nil>
	W1213 19:07:55.631030   92925 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 19:07:55.634239   92925 out.go:252] * Restarting existing docker container for "ha-605114-m02" ...
	I1213 19:07:55.634323   92925 cli_runner.go:164] Run: docker start ha-605114-m02
	I1213 19:07:56.013613   92925 cli_runner.go:164] Run: docker container inspect ha-605114-m02 --format={{.State.Status}}
	I1213 19:07:56.043229   92925 kic.go:430] container "ha-605114-m02" state is running.
	I1213 19:07:56.043952   92925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-605114-m02
	I1213 19:07:56.072863   92925 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/config.json ...
	I1213 19:07:56.073198   92925 machine.go:94] provisionDockerMachine start ...
	I1213 19:07:56.073260   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114-m02
	I1213 19:07:56.107315   92925 main.go:143] libmachine: Using SSH client type: native
	I1213 19:07:56.107694   92925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I1213 19:07:56.107711   92925 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 19:07:56.108441   92925 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1213 19:07:59.320519   92925 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-605114-m02
	
	I1213 19:07:59.320540   92925 ubuntu.go:182] provisioning hostname "ha-605114-m02"
	I1213 19:07:59.320600   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114-m02
	I1213 19:07:59.354148   92925 main.go:143] libmachine: Using SSH client type: native
	I1213 19:07:59.354465   92925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I1213 19:07:59.354476   92925 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-605114-m02 && echo "ha-605114-m02" | sudo tee /etc/hostname
	I1213 19:07:59.560753   92925 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-605114-m02
	
	I1213 19:07:59.560835   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114-m02
	I1213 19:07:59.590681   92925 main.go:143] libmachine: Using SSH client type: native
	I1213 19:07:59.590982   92925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I1213 19:07:59.590997   92925 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-605114-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-605114-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-605114-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 19:07:59.777428   92925 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 19:07:59.777502   92925 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-2686/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-2686/.minikube}
	I1213 19:07:59.777532   92925 ubuntu.go:190] setting up certificates
	I1213 19:07:59.777573   92925 provision.go:84] configureAuth start
	I1213 19:07:59.777669   92925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-605114-m02
	I1213 19:07:59.806547   92925 provision.go:143] copyHostCerts
	I1213 19:07:59.806589   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem
	I1213 19:07:59.806621   92925 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem, removing ...
	I1213 19:07:59.806628   92925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem
	I1213 19:07:59.806709   92925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem (1123 bytes)
	I1213 19:07:59.806788   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem
	I1213 19:07:59.806805   92925 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem, removing ...
	I1213 19:07:59.806810   92925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem
	I1213 19:07:59.806854   92925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem (1675 bytes)
	I1213 19:07:59.806898   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem
	I1213 19:07:59.806916   92925 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem, removing ...
	I1213 19:07:59.806920   92925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem
	I1213 19:07:59.806944   92925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem (1082 bytes)
	I1213 19:07:59.806989   92925 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem org=jenkins.ha-605114-m02 san=[127.0.0.1 192.168.49.3 ha-605114-m02 localhost minikube]
	I1213 19:07:59.961185   92925 provision.go:177] copyRemoteCerts
	I1213 19:07:59.961261   92925 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 19:07:59.961306   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114-m02
	I1213 19:07:59.986810   92925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/ha-605114-m02/id_rsa Username:docker}
	I1213 19:08:00.131955   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1213 19:08:00.132032   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 19:08:00.173539   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1213 19:08:00.173623   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1213 19:08:00.207894   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1213 19:08:00.207965   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 19:08:00.244666   92925 provision.go:87] duration metric: took 467.054938ms to configureAuth
	I1213 19:08:00.244712   92925 ubuntu.go:206] setting minikube options for container-runtime
	I1213 19:08:00.245918   92925 config.go:182] Loaded profile config "ha-605114": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 19:08:00.246082   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114-m02
	I1213 19:08:00.327171   92925 main.go:143] libmachine: Using SSH client type: native
	I1213 19:08:00.327492   92925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I1213 19:08:00.327508   92925 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 19:08:01.970074   92925 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 19:08:01.970150   92925 machine.go:97] duration metric: took 5.896940025s to provisionDockerMachine
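	The CRIO_MINIKUBE_OPTIONS file written over SSH just above adds --insecure-registry 10.96.0.0/12, so CRI-O will accept plain-HTTP pulls from registries exposed on in-cluster service IPs; the crio unit in the kicbase image is expected to source /etc/sysconfig/crio.minikube as an environment file. A hedged check that the setting landed on the restored m02 node (container name taken from this run):

	    docker exec ha-605114-m02 cat /etc/sysconfig/crio.minikube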
	I1213 19:08:01.970177   92925 start.go:293] postStartSetup for "ha-605114-m02" (driver="docker")
	I1213 19:08:01.970221   92925 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 19:08:01.970316   92925 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 19:08:01.970411   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114-m02
	I1213 19:08:02.009089   92925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/ha-605114-m02/id_rsa Username:docker}
	I1213 19:08:02.129494   92925 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 19:08:02.136549   92925 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 19:08:02.136573   92925 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 19:08:02.136585   92925 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-2686/.minikube/addons for local assets ...
	I1213 19:08:02.136646   92925 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-2686/.minikube/files for local assets ...
	I1213 19:08:02.136728   92925 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem -> 46372.pem in /etc/ssl/certs
	I1213 19:08:02.136734   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem -> /etc/ssl/certs/46372.pem
	I1213 19:08:02.136842   92925 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 19:08:02.171248   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem --> /etc/ssl/certs/46372.pem (1708 bytes)
	I1213 19:08:02.216469   92925 start.go:296] duration metric: took 246.261152ms for postStartSetup
	I1213 19:08:02.216625   92925 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 19:08:02.216685   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114-m02
	I1213 19:08:02.262639   92925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/ha-605114-m02/id_rsa Username:docker}
	I1213 19:08:02.374718   92925 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 19:08:02.380084   92925 fix.go:56] duration metric: took 6.780718951s for fixHost
	I1213 19:08:02.380108   92925 start.go:83] releasing machines lock for "ha-605114-m02", held for 6.780770726s
	I1213 19:08:02.380176   92925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-605114-m02
	I1213 19:08:02.401071   92925 out.go:179] * Found network options:
	I1213 19:08:02.404164   92925 out.go:179]   - NO_PROXY=192.168.49.2
	W1213 19:08:02.407079   92925 proxy.go:120] fail to check proxy env: Error ip not in block
	W1213 19:08:02.407127   92925 proxy.go:120] fail to check proxy env: Error ip not in block
	I1213 19:08:02.407198   92925 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 19:08:02.407241   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114-m02
	I1213 19:08:02.407257   92925 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 19:08:02.407313   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114-m02
	I1213 19:08:02.441677   92925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/ha-605114-m02/id_rsa Username:docker}
	I1213 19:08:02.462715   92925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/ha-605114-m02/id_rsa Username:docker}
	I1213 19:08:02.700903   92925 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 19:08:02.788606   92925 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 19:08:02.788680   92925 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 19:08:02.802406   92925 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 19:08:02.802471   92925 start.go:496] detecting cgroup driver to use...
	I1213 19:08:02.802520   92925 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 19:08:02.802599   92925 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 19:08:02.821557   92925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 19:08:02.843971   92925 docker.go:218] disabling cri-docker service (if available) ...
	I1213 19:08:02.844081   92925 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 19:08:02.866953   92925 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 19:08:02.884909   92925 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 19:08:03.137948   92925 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 19:08:03.363884   92925 docker.go:234] disabling docker service ...
	I1213 19:08:03.363990   92925 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 19:08:03.388880   92925 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 19:08:03.405597   92925 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 19:08:03.645933   92925 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 19:08:03.919704   92925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 19:08:03.941774   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 19:08:03.972913   92925 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 19:08:03.973103   92925 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:08:03.988083   92925 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 19:08:03.988256   92925 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:08:04.019667   92925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:08:04.031645   92925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:08:04.049709   92925 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 19:08:04.086713   92925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:08:04.109181   92925 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:08:04.119963   92925 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:08:04.154436   92925 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 19:08:04.170086   92925 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 19:08:04.191001   92925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 19:08:04.484381   92925 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 19:09:34.781930   92925 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.297515083s)
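	The sed pipeline above rewrites the node's /etc/crio/crio.conf.d/02-crio.conf drop-in (pause image registry.k8s.io/pause:3.10.1, cgroup_manager "cgroupfs", conmon_cgroup "pod", and a default_sysctls entry that re-opens unprivileged low ports), and the crio restart that just completed, roughly 90 seconds here, picks those changes up. A quick confirmation sketch against the m02 container from this run:

	    docker exec ha-605114-m02 grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf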
	I1213 19:09:34.781956   92925 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 19:09:34.782006   92925 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 19:09:34.785743   92925 start.go:564] Will wait 60s for crictl version
	I1213 19:09:34.785812   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:09:34.789353   92925 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 19:09:34.818524   92925 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 19:09:34.818612   92925 ssh_runner.go:195] Run: crio --version
	I1213 19:09:34.852441   92925 ssh_runner.go:195] Run: crio --version
	I1213 19:09:34.887257   92925 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1213 19:09:34.890293   92925 out.go:179]   - env NO_PROXY=192.168.49.2
	I1213 19:09:34.893426   92925 cli_runner.go:164] Run: docker network inspect ha-605114 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 19:09:34.911684   92925 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 19:09:34.915601   92925 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 19:09:34.925402   92925 mustload.go:66] Loading cluster: ha-605114
	I1213 19:09:34.925637   92925 config.go:182] Loaded profile config "ha-605114": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 19:09:34.925900   92925 cli_runner.go:164] Run: docker container inspect ha-605114 --format={{.State.Status}}
	I1213 19:09:34.944458   92925 host.go:66] Checking if "ha-605114" exists ...
	I1213 19:09:34.944731   92925 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114 for IP: 192.168.49.3
	I1213 19:09:34.944745   92925 certs.go:195] generating shared ca certs ...
	I1213 19:09:34.944760   92925 certs.go:227] acquiring lock for ca certs: {Name:mkf9b87b9a1a82bdfcae8a54365751154f6d858f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:09:34.944889   92925 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key
	I1213 19:09:34.944944   92925 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key
	I1213 19:09:34.944957   92925 certs.go:257] generating profile certs ...
	I1213 19:09:34.945069   92925 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/client.key
	I1213 19:09:34.945157   92925 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.key.29c07aea
	I1213 19:09:34.945202   92925 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/proxy-client.key
	I1213 19:09:34.945215   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1213 19:09:34.945230   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1213 19:09:34.945254   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1213 19:09:34.945266   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1213 19:09:34.945281   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1213 19:09:34.945294   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1213 19:09:34.945309   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1213 19:09:34.945328   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1213 19:09:34.945383   92925 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637.pem (1338 bytes)
	W1213 19:09:34.945424   92925 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637_empty.pem, impossibly tiny 0 bytes
	I1213 19:09:34.945446   92925 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 19:09:34.945479   92925 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem (1082 bytes)
	I1213 19:09:34.945508   92925 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem (1123 bytes)
	I1213 19:09:34.945538   92925 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem (1675 bytes)
	I1213 19:09:34.945583   92925 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem (1708 bytes)
	I1213 19:09:34.945616   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:09:34.945632   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637.pem -> /usr/share/ca-certificates/4637.pem
	I1213 19:09:34.945649   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem -> /usr/share/ca-certificates/46372.pem
	I1213 19:09:34.945719   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114
	I1213 19:09:34.963328   92925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/ha-605114/id_rsa Username:docker}
	I1213 19:09:35.065324   92925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1213 19:09:35.069081   92925 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1213 19:09:35.077819   92925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1213 19:09:35.081455   92925 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1213 19:09:35.089763   92925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1213 19:09:35.093612   92925 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1213 19:09:35.102260   92925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1213 19:09:35.106728   92925 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1213 19:09:35.115519   92925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1213 19:09:35.119196   92925 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1213 19:09:35.129001   92925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1213 19:09:35.132624   92925 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1213 19:09:35.141653   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 19:09:35.161897   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 19:09:35.182131   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 19:09:35.202060   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 19:09:35.222310   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1213 19:09:35.243497   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 19:09:35.265517   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 19:09:35.284987   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 19:09:35.302971   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 19:09:35.320388   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637.pem --> /usr/share/ca-certificates/4637.pem (1338 bytes)
	I1213 19:09:35.338865   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem --> /usr/share/ca-certificates/46372.pem (1708 bytes)
	I1213 19:09:35.356332   92925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1213 19:09:35.369616   92925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1213 19:09:35.383108   92925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1213 19:09:35.396928   92925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1213 19:09:35.410529   92925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1213 19:09:35.423162   92925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1213 19:09:35.436667   92925 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1213 19:09:35.450451   92925 ssh_runner.go:195] Run: openssl version
	I1213 19:09:35.457142   92925 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:09:35.464516   92925 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 19:09:35.472169   92925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:09:35.475920   92925 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:09:35.475984   92925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:09:35.516956   92925 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 19:09:35.524426   92925 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4637.pem
	I1213 19:09:35.532136   92925 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4637.pem /etc/ssl/certs/4637.pem
	I1213 19:09:35.539767   92925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4637.pem
	I1213 19:09:35.543798   92925 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 18:27 /usr/share/ca-certificates/4637.pem
	I1213 19:09:35.543906   92925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4637.pem
	I1213 19:09:35.586837   92925 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 19:09:35.594791   92925 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/46372.pem
	I1213 19:09:35.602550   92925 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/46372.pem /etc/ssl/certs/46372.pem
	I1213 19:09:35.610984   92925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/46372.pem
	I1213 19:09:35.614895   92925 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 18:27 /usr/share/ca-certificates/46372.pem
	I1213 19:09:35.614973   92925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/46372.pem
	I1213 19:09:35.661484   92925 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 19:09:35.668847   92925 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 19:09:35.672924   92925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 19:09:35.714926   92925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 19:09:35.757278   92925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 19:09:35.798060   92925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 19:09:35.840340   92925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 19:09:35.883228   92925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
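
The six `-checkend 86400` runs above verify that each control-plane certificate remains valid for at least another 24 hours. A minimal Go sketch of the equivalent check, using one of the certificate paths from the log (standard library only; not minikube's own code):

// certcheck.go - sketch of an "openssl x509 -checkend 86400" style validity check.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Equivalent of `-checkend 86400`: fail if the certificate
	// expires within the next 86400 seconds (24 hours).
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}
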
	I1213 19:09:35.926498   92925 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.2 crio true true} ...
	I1213 19:09:35.926597   92925 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-605114-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-605114 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 19:09:35.926628   92925 kube-vip.go:115] generating kube-vip config ...
	I1213 19:09:35.926680   92925 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1213 19:09:35.939407   92925 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1213 19:09:35.939464   92925 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
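
The manifest above is written shortly afterwards to /etc/kubernetes/manifests/kube-vip.yaml, where the kubelet runs it as a static Pod advertising the 192.168.49.254 control-plane VIP on port 8443. A rough sketch, using Go's text/template, of how such a manifest can be rendered from the VIP address and port; the template text below is a trimmed illustration, not minikube's actual kube-vip template:

package main

import (
	"os"
	"text/template"
)

// vipParams holds the values substituted into the manifest;
// the field names are illustrative, not minikube's actual struct.
type vipParams struct {
	VIP  string
	Port string
}

const manifestTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v1.0.2
    args: ["manager"]
    env:
    - name: port
      value: "{{ .Port }}"
    - name: address
      value: {{ .VIP }}
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(manifestTmpl))
	// Values taken from the run above: VIP 192.168.49.254, API server port 8443.
	if err := t.Execute(os.Stdout, vipParams{VIP: "192.168.49.254", Port: "8443"}); err != nil {
		panic(err)
	}
}
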
	I1213 19:09:35.939538   92925 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 19:09:35.948342   92925 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 19:09:35.948446   92925 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1213 19:09:35.956523   92925 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1213 19:09:35.970227   92925 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 19:09:35.985384   92925 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1213 19:09:36.004385   92925 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1213 19:09:36.008483   92925 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
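
The /bin/bash one-liner above drops any stale control-plane.minikube.internal entry from /etc/hosts and appends the VIP mapping. A minimal Go sketch of the same read, filter, and append step; it writes to a scratch path rather than /etc/hosts, purely for illustration:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.49.254\tcontrol-plane.minikube.internal"

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any stale control-plane.minikube.internal mapping,
		// mirroring the grep -v in the one-liner.
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)

	// Write to a scratch file; the real flow sudo-copies the result over /etc/hosts.
	out := strings.Join(kept, "\n") + "\n"
	if err := os.WriteFile("/tmp/hosts.updated", []byte(out), 0o644); err != nil {
		panic(err)
	}
	fmt.Println("wrote /tmp/hosts.updated")
}
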
	I1213 19:09:36.019218   92925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 19:09:36.155982   92925 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 19:09:36.170330   92925 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 19:09:36.170793   92925 config.go:182] Loaded profile config "ha-605114": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 19:09:36.174251   92925 out.go:179] * Verifying Kubernetes components...
	I1213 19:09:36.177213   92925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 19:09:36.319740   92925 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 19:09:36.334811   92925 kapi.go:59] client config for ha-605114: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/client.crt", KeyFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/client.key", CAFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1213 19:09:36.334886   92925 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
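
The two lines above show the client config being built against the HA VIP (https://192.168.49.254:8443) and then overridden to the primary control plane (https://192.168.49.2:8443) because the VIP is not answering yet. A hedged client-go sketch of the same load-then-override pattern; the kubeconfig path is an assumption, since minikube assembles the config directly from the client.crt/client.key/ca.crt files shown above:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path, used only to obtain a rest.Config for this sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	// Same idea as "Overriding stale ClientConfig host": talk to the primary
	// control plane directly while the VIP is still coming up.
	cfg.Host = "https://192.168.49.2:8443"

	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := client.CoreV1().Nodes().Get(context.Background(), "ha-605114-m02", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("found node", node.Name)
}
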
	I1213 19:09:36.335095   92925 node_ready.go:35] waiting up to 6m0s for node "ha-605114-m02" to be "Ready" ...
	I1213 19:09:39.281934   92925 node_ready.go:49] node "ha-605114-m02" is "Ready"
	I1213 19:09:39.281962   92925 node_ready.go:38] duration metric: took 2.946847766s for node "ha-605114-m02" to be "Ready" ...
	I1213 19:09:39.281975   92925 api_server.go:52] waiting for apiserver process to appear ...
	I1213 19:09:39.282034   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:39.782149   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:40.282856   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:40.782144   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:41.282958   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:41.782581   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:42.282264   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:42.782257   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:43.283132   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:43.782112   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:44.282168   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:44.782088   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:45.282593   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:45.782122   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:46.282927   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:46.782182   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:47.282980   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:47.783112   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:48.282633   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:48.782211   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:49.282732   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:49.782187   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:50.282735   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:50.782142   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:51.282519   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:51.782152   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:52.282197   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:52.782636   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:53.282768   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:53.782116   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:54.282300   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:54.782182   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:55.282883   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:55.783092   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:56.282203   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:56.783098   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:57.282717   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:57.782189   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:58.282252   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:58.782909   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:59.282100   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:59.782310   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:00.289145   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:00.782212   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:01.282192   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:01.782760   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:02.282108   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:02.782972   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:03.282353   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:03.782328   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:04.282366   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:04.782174   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:05.282835   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:05.782488   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:06.283036   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:06.782436   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:07.282292   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:07.782212   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:08.283033   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:08.783070   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:09.282897   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:09.782668   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:10.282222   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:10.782267   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:11.282198   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:11.782837   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:12.282212   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:12.783009   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:13.282406   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:13.782556   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:14.283140   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:14.782783   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:15.283077   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:15.783150   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:16.282934   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:16.783092   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:17.282186   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:17.782253   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:18.282771   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:18.782339   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:19.282255   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:19.782254   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:20.282346   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:20.782992   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:21.282270   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:21.782169   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:22.282176   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:22.782681   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:23.282402   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:23.783116   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:24.282118   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:24.782962   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:25.283031   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:25.783024   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:26.283105   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:26.782110   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:27.282833   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:27.782332   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:28.282978   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:28.782284   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:29.283095   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:29.782866   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:30.282438   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:30.782580   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:31.282697   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:31.783148   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:32.283119   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:32.782971   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:33.282108   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:33.783088   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:34.283075   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:34.782667   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:35.282868   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:35.782514   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
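
The same pgrep probe repeats roughly twice a second from 19:09:39 onward without finding a kube-apiserver process on the node, after which minikube begins collecting component logs (the crictl listings that follow). A rough Go sketch of this kind of fixed-interval poll with a deadline, standard library only and omitting the SSH transport the real ssh_runner uses:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls pgrep until the pattern matches or the timeout elapses.
func waitForProcess(pattern string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		// pgrep exits 0 only when at least one process matches the pattern.
		if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %s waiting for %q", timeout, pattern)
		}
		time.Sleep(interval)
	}
}

func main() {
	if err := waitForProcess("kube-apiserver.*minikube.*", 500*time.Millisecond, 6*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kube-apiserver process is up")
}
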
	I1213 19:10:36.282200   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:10:36.282308   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:10:36.311092   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:36.311117   92925 cri.go:89] found id: ""
	I1213 19:10:36.311125   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:10:36.311180   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:36.314888   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:10:36.314970   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:10:36.342553   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:36.342573   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:36.342578   92925 cri.go:89] found id: ""
	I1213 19:10:36.342586   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:10:36.342655   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:36.346486   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:36.349986   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:10:36.350061   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:10:36.375198   92925 cri.go:89] found id: ""
	I1213 19:10:36.375262   92925 logs.go:282] 0 containers: []
	W1213 19:10:36.375275   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:10:36.375281   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:10:36.375350   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:10:36.406767   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:36.406789   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:36.406794   92925 cri.go:89] found id: ""
	I1213 19:10:36.406801   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:10:36.406857   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:36.410743   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:36.414390   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:10:36.414490   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:10:36.441810   92925 cri.go:89] found id: ""
	I1213 19:10:36.441833   92925 logs.go:282] 0 containers: []
	W1213 19:10:36.441841   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:10:36.441848   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:10:36.441911   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:10:36.468354   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:36.468374   92925 cri.go:89] found id: ""
	I1213 19:10:36.468382   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:10:36.468436   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:36.472238   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:10:36.472316   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:10:36.500356   92925 cri.go:89] found id: ""
	I1213 19:10:36.500383   92925 logs.go:282] 0 containers: []
	W1213 19:10:36.500394   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:10:36.500404   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:10:36.500414   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:10:36.593811   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:10:36.593845   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:10:36.607625   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:10:36.607656   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:10:37.031907   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:10:37.023726    1511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:37.024402    1511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:37.025999    1511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:37.026604    1511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:37.028296    1511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:10:37.023726    1511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:37.024402    1511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:37.025999    1511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:37.026604    1511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:37.028296    1511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:10:37.031933   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:10:37.031948   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:37.057050   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:10:37.057079   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:37.097228   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:10:37.097262   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:37.148963   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:10:37.149014   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:37.217399   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:10:37.217436   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:37.248174   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:10:37.248203   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:37.274722   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:10:37.274748   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:10:37.355342   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:10:37.355379   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
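
The "container status" step above shells out with a fallback chain: use crictl where `which` resolves it (falling back to the bare name), and if that fails, fall back to `docker ps -a`. A small illustrative Go sketch of the same try-one-then-fall-back pattern; this is not minikube's implementation:

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus tries crictl first and falls back to docker,
// mirroring the `crictl ps -a || docker ps -a` chain above.
func containerStatus() (string, error) {
	if _, err := exec.LookPath("crictl"); err == nil {
		out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
		if err == nil {
			return string(out), nil
		}
	}
	out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	return string(out), err
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("could not list containers:", err)
		return
	}
	fmt.Print(out)
}
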
	I1213 19:10:39.885413   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:39.896181   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:10:39.896250   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:10:39.928054   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:39.928078   92925 cri.go:89] found id: ""
	I1213 19:10:39.928087   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:10:39.928142   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:39.932690   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:10:39.932760   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:10:39.962089   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:39.962110   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:39.962114   92925 cri.go:89] found id: ""
	I1213 19:10:39.962122   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:10:39.962178   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:39.966008   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:39.970141   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:10:39.970211   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:10:40.031915   92925 cri.go:89] found id: ""
	I1213 19:10:40.031938   92925 logs.go:282] 0 containers: []
	W1213 19:10:40.031947   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:10:40.031954   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:10:40.032022   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:10:40.075124   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:40.075145   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:40.075150   92925 cri.go:89] found id: ""
	I1213 19:10:40.075157   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:10:40.075216   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:40.079588   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:40.083956   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:10:40.084077   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:10:40.120592   92925 cri.go:89] found id: ""
	I1213 19:10:40.120623   92925 logs.go:282] 0 containers: []
	W1213 19:10:40.120633   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:10:40.120640   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:10:40.120707   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:10:40.162573   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:40.162599   92925 cri.go:89] found id: ""
	I1213 19:10:40.162620   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:10:40.162692   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:40.167731   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:10:40.167810   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:10:40.197646   92925 cri.go:89] found id: ""
	I1213 19:10:40.197681   92925 logs.go:282] 0 containers: []
	W1213 19:10:40.197692   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:10:40.197701   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:10:40.197714   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:10:40.279428   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:10:40.270096    1639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:40.270945    1639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:40.271678    1639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:40.273521    1639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:40.274072    1639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:10:40.270096    1639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:40.270945    1639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:40.271678    1639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:40.273521    1639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:40.274072    1639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:10:40.279462   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:10:40.279476   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:40.317833   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:10:40.317867   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:40.365303   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:10:40.365339   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:40.391972   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:10:40.392006   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:10:40.467785   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:10:40.467824   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:10:40.499555   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:10:40.499587   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:10:40.601537   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:10:40.601571   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:10:40.614326   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:10:40.614357   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:40.643794   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:10:40.643823   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:40.696205   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:10:40.696242   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:43.224045   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:43.234786   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:10:43.234854   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:10:43.262459   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:43.262481   92925 cri.go:89] found id: ""
	I1213 19:10:43.262489   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:10:43.262544   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:43.267289   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:10:43.267362   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:10:43.294825   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:43.294846   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:43.294858   92925 cri.go:89] found id: ""
	I1213 19:10:43.294873   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:10:43.294931   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:43.298717   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:43.302500   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:10:43.302576   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:10:43.328978   92925 cri.go:89] found id: ""
	I1213 19:10:43.329001   92925 logs.go:282] 0 containers: []
	W1213 19:10:43.329048   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:10:43.329055   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:10:43.329115   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:10:43.358394   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:43.358419   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:43.358426   92925 cri.go:89] found id: ""
	I1213 19:10:43.358434   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:10:43.358544   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:43.363176   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:43.366906   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:10:43.366996   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:10:43.396556   92925 cri.go:89] found id: ""
	I1213 19:10:43.396583   92925 logs.go:282] 0 containers: []
	W1213 19:10:43.396592   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:10:43.396598   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:10:43.396657   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:10:43.422776   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:43.422803   92925 cri.go:89] found id: ""
	I1213 19:10:43.422813   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:10:43.422886   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:43.426512   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:10:43.426579   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:10:43.452942   92925 cri.go:89] found id: ""
	I1213 19:10:43.452966   92925 logs.go:282] 0 containers: []
	W1213 19:10:43.452975   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:10:43.452984   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:10:43.452996   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:43.479637   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:10:43.479708   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:10:43.492492   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:10:43.492521   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:43.555898   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:10:43.555930   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:43.583059   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:10:43.583089   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:10:43.665528   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:10:43.665562   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:10:43.713108   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:10:43.713136   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:10:43.817894   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:10:43.817930   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:10:43.900953   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:10:43.892916    1822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:43.893797    1822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:43.895356    1822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:43.895650    1822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:43.897247    1822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:10:43.892916    1822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:43.893797    1822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:43.895356    1822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:43.895650    1822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:43.897247    1822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:10:43.900978   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:10:43.900992   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:43.928040   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:10:43.928067   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:43.989295   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:10:43.989349   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:46.551759   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:46.562922   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:10:46.562999   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:10:46.590576   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:46.590607   92925 cri.go:89] found id: ""
	I1213 19:10:46.590615   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:10:46.590669   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:46.594481   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:10:46.594557   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:10:46.619444   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:46.619466   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:46.619472   92925 cri.go:89] found id: ""
	I1213 19:10:46.619480   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:10:46.619562   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:46.623350   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:46.626652   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:10:46.626726   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:10:46.655019   92925 cri.go:89] found id: ""
	I1213 19:10:46.655045   92925 logs.go:282] 0 containers: []
	W1213 19:10:46.655055   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:10:46.655061   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:10:46.655119   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:10:46.685081   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:46.685108   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:46.685113   92925 cri.go:89] found id: ""
	I1213 19:10:46.685121   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:10:46.685178   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:46.689664   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:46.693381   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:10:46.693455   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:10:46.719871   92925 cri.go:89] found id: ""
	I1213 19:10:46.719897   92925 logs.go:282] 0 containers: []
	W1213 19:10:46.719906   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:10:46.719914   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:10:46.719979   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:10:46.747153   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:46.747176   92925 cri.go:89] found id: ""
	I1213 19:10:46.747184   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:10:46.747239   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:46.751093   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:10:46.751198   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:10:46.777729   92925 cri.go:89] found id: ""
	I1213 19:10:46.777800   92925 logs.go:282] 0 containers: []
	W1213 19:10:46.777816   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:10:46.777827   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:10:46.777840   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:46.807286   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:10:46.807315   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:10:46.900226   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:10:46.900266   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:10:46.913850   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:10:46.913877   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:10:46.995097   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:10:46.986432    1930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:46.987537    1930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:46.988185    1930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:46.989944    1930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:46.990430    1930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:10:46.986432    1930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:46.987537    1930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:46.988185    1930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:46.989944    1930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:46.990430    1930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:10:46.995121   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:10:46.995146   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:47.020980   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:10:47.021038   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:47.062312   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:10:47.062348   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:10:47.143840   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:10:47.143916   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:10:47.176420   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:10:47.176455   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:47.221958   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:10:47.222003   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:47.276308   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:10:47.276349   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:49.804769   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:49.815535   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:10:49.815609   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:10:49.841153   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:49.841227   92925 cri.go:89] found id: ""
	I1213 19:10:49.841258   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:10:49.841341   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:49.844798   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:10:49.844903   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:10:49.872086   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:49.872111   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:49.872117   92925 cri.go:89] found id: ""
	I1213 19:10:49.872124   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:10:49.872178   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:49.875975   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:49.879817   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:10:49.879892   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:10:49.918961   92925 cri.go:89] found id: ""
	I1213 19:10:49.918987   92925 logs.go:282] 0 containers: []
	W1213 19:10:49.918996   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:10:49.919002   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:10:49.919059   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:10:49.959969   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:49.959994   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:49.959999   92925 cri.go:89] found id: ""
	I1213 19:10:49.960007   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:10:49.960063   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:49.964635   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:49.969140   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:10:49.969208   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:10:50.006023   92925 cri.go:89] found id: ""
	I1213 19:10:50.006049   92925 logs.go:282] 0 containers: []
	W1213 19:10:50.006058   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:10:50.006064   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:10:50.006143   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:10:50.040945   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:50.040965   92925 cri.go:89] found id: ""
	I1213 19:10:50.040973   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:10:50.041060   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:50.044991   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:10:50.045100   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:10:50.073352   92925 cri.go:89] found id: ""
	I1213 19:10:50.073383   92925 logs.go:282] 0 containers: []
	W1213 19:10:50.073409   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:10:50.073420   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:10:50.073437   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:10:50.092169   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:10:50.092219   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:50.167681   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:10:50.167719   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:50.220989   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:10:50.221028   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:50.252059   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:10:50.252091   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:10:50.358508   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:10:50.358555   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:10:50.434424   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:10:50.426219    2082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:50.426850    2082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:50.428449    2082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:50.429020    2082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:50.430880    2082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:10:50.426219    2082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:50.426850    2082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:50.428449    2082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:50.429020    2082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:50.430880    2082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:10:50.434452   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:10:50.434467   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:50.458963   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:10:50.458992   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:50.516376   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:10:50.516410   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:50.543978   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:10:50.544009   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:10:50.619429   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:10:50.619468   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:10:53.153421   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:53.163979   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:10:53.164048   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:10:53.191198   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:53.191259   92925 cri.go:89] found id: ""
	I1213 19:10:53.191291   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:10:53.191363   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:53.195132   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:10:53.195204   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:10:53.222253   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:53.222276   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:53.222280   92925 cri.go:89] found id: ""
	I1213 19:10:53.222287   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:10:53.222370   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:53.226176   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:53.229762   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:10:53.229878   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:10:53.260062   92925 cri.go:89] found id: ""
	I1213 19:10:53.260088   92925 logs.go:282] 0 containers: []
	W1213 19:10:53.260096   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:10:53.260103   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:10:53.260159   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:10:53.289940   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:53.290005   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:53.290024   92925 cri.go:89] found id: ""
	I1213 19:10:53.290037   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:10:53.290106   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:53.293745   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:53.297116   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:10:53.297199   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:10:53.324233   92925 cri.go:89] found id: ""
	I1213 19:10:53.324259   92925 logs.go:282] 0 containers: []
	W1213 19:10:53.324268   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:10:53.324274   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:10:53.324329   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:10:53.355230   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:53.355252   92925 cri.go:89] found id: ""
	I1213 19:10:53.355260   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:10:53.355312   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:53.358865   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:10:53.358932   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:10:53.388377   92925 cri.go:89] found id: ""
	I1213 19:10:53.388460   92925 logs.go:282] 0 containers: []
	W1213 19:10:53.388486   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:10:53.388531   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:10:53.388561   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:10:53.482197   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:10:53.482233   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:10:53.495635   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:10:53.495666   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:53.527174   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:10:53.527201   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:53.568473   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:10:53.568509   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:53.613038   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:10:53.613068   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:53.666213   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:10:53.666248   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:10:53.746993   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:10:53.747031   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:10:53.777726   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:10:53.777758   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:10:53.849162   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:10:53.840835    2240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:53.841725    2240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:53.842564    2240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:53.844081    2240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:53.844396    2240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:10:53.840835    2240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:53.841725    2240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:53.842564    2240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:53.844081    2240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:53.844396    2240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:10:53.849193   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:10:53.849207   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:53.879522   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:10:53.879551   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:56.408599   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:56.420063   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:10:56.420130   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:10:56.446598   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:56.446622   92925 cri.go:89] found id: ""
	I1213 19:10:56.446630   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:10:56.446691   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:56.450451   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:10:56.450519   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:10:56.477437   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:56.477460   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:56.477465   92925 cri.go:89] found id: ""
	I1213 19:10:56.477472   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:10:56.477560   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:56.481341   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:56.484891   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:10:56.484963   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:10:56.513437   92925 cri.go:89] found id: ""
	I1213 19:10:56.513459   92925 logs.go:282] 0 containers: []
	W1213 19:10:56.513467   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:10:56.513473   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:10:56.513531   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:10:56.542772   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:56.542812   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:56.542818   92925 cri.go:89] found id: ""
	I1213 19:10:56.542845   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:10:56.542930   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:56.546773   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:56.550355   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:10:56.550430   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:10:56.577663   92925 cri.go:89] found id: ""
	I1213 19:10:56.577687   92925 logs.go:282] 0 containers: []
	W1213 19:10:56.577695   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:10:56.577701   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:10:56.577811   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:10:56.604755   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:56.604827   92925 cri.go:89] found id: ""
	I1213 19:10:56.604849   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:10:56.604945   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:56.608549   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:10:56.608618   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:10:56.635735   92925 cri.go:89] found id: ""
	I1213 19:10:56.635759   92925 logs.go:282] 0 containers: []
	W1213 19:10:56.635767   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:10:56.635777   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:10:56.635789   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:10:56.729353   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:10:56.729388   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:10:56.741845   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:10:56.741874   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:10:56.815151   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:10:56.806729    2332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:56.807450    2332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:56.808916    2332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:56.809436    2332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:56.811611    2332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:10:56.806729    2332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:56.807450    2332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:56.808916    2332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:56.809436    2332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:56.811611    2332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:10:56.815178   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:10:56.815193   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:56.871711   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:10:56.871748   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:56.904003   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:10:56.904034   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:10:56.941519   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:10:56.941549   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:56.974994   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:10:56.975022   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:57.015259   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:10:57.015290   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:57.059492   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:10:57.059527   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:57.085661   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:10:57.085690   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:10:59.675412   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:59.686117   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:10:59.686192   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:10:59.710921   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:59.710951   92925 cri.go:89] found id: ""
	I1213 19:10:59.710960   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:10:59.711015   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:59.714894   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:10:59.715008   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:10:59.742170   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:59.742193   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:59.742199   92925 cri.go:89] found id: ""
	I1213 19:10:59.742206   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:10:59.742261   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:59.746138   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:59.750866   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:10:59.750942   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:10:59.777917   92925 cri.go:89] found id: ""
	I1213 19:10:59.777943   92925 logs.go:282] 0 containers: []
	W1213 19:10:59.777951   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:10:59.777957   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:10:59.778015   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:10:59.803883   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:59.803903   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:59.803908   92925 cri.go:89] found id: ""
	I1213 19:10:59.803916   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:10:59.803971   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:59.807903   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:59.811388   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:10:59.811453   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:10:59.837952   92925 cri.go:89] found id: ""
	I1213 19:10:59.837977   92925 logs.go:282] 0 containers: []
	W1213 19:10:59.837986   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:10:59.837992   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:10:59.838048   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:10:59.864431   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:59.864490   92925 cri.go:89] found id: ""
	I1213 19:10:59.864512   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:10:59.864594   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:59.869272   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:10:59.869345   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:10:59.896571   92925 cri.go:89] found id: ""
	I1213 19:10:59.896603   92925 logs.go:282] 0 containers: []
	W1213 19:10:59.896612   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:10:59.896622   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:10:59.896634   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:10:59.997222   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:10:59.997313   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:00.122051   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:00.122166   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:00.334228   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:00.323858    2472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:00.324625    2472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:00.326029    2472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:00.326896    2472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:00.328835    2472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:00.323858    2472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:00.324625    2472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:00.326029    2472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:00.326896    2472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:00.328835    2472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:00.334270   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:00.334284   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:00.397345   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:00.397381   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:00.460082   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:00.460118   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:00.507030   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:00.507068   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:00.561579   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:00.561611   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:00.590319   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:11:00.590346   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:00.618590   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:00.618617   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:00.700620   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:00.700655   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:03.247538   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:03.260650   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:03.260720   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:03.296710   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:03.296736   92925 cri.go:89] found id: ""
	I1213 19:11:03.296744   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:03.296804   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:03.300974   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:03.301083   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:03.332989   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:03.333019   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:03.333024   92925 cri.go:89] found id: ""
	I1213 19:11:03.333031   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:03.333085   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:03.337959   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:03.341569   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:03.341642   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:03.367805   92925 cri.go:89] found id: ""
	I1213 19:11:03.367831   92925 logs.go:282] 0 containers: []
	W1213 19:11:03.367840   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:03.367847   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:03.367910   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:03.396144   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:03.396165   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:03.396170   92925 cri.go:89] found id: ""
	I1213 19:11:03.396177   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:03.396234   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:03.400643   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:03.404350   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:03.404422   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:03.431472   92925 cri.go:89] found id: ""
	I1213 19:11:03.431498   92925 logs.go:282] 0 containers: []
	W1213 19:11:03.431508   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:03.431520   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:03.431602   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:03.459968   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:03.460034   92925 cri.go:89] found id: ""
	I1213 19:11:03.460058   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:11:03.460134   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:03.464138   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:03.464230   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:03.491871   92925 cri.go:89] found id: ""
	I1213 19:11:03.491897   92925 logs.go:282] 0 containers: []
	W1213 19:11:03.491906   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:03.491916   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:03.491928   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:03.528376   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:03.528451   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:03.562095   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:03.562124   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:03.575381   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:03.575410   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:03.602586   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:03.602615   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:03.651880   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:03.651912   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:03.708104   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:11:03.708142   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:03.736240   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:03.736268   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:03.814277   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:03.814314   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:03.920505   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:03.920542   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:04.025281   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:04.014467    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:04.015603    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:04.016913    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:04.017960    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:04.019083    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:04.014467    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:04.015603    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:04.016913    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:04.017960    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:04.019083    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:04.025308   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:04.025326   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:06.584492   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:06.595822   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:06.595900   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:06.627891   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:06.627917   92925 cri.go:89] found id: ""
	I1213 19:11:06.627925   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:06.627982   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:06.632107   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:06.632184   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:06.657896   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:06.657921   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:06.657926   92925 cri.go:89] found id: ""
	I1213 19:11:06.657934   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:06.657989   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:06.661493   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:06.665545   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:06.665611   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:06.696673   92925 cri.go:89] found id: ""
	I1213 19:11:06.696748   92925 logs.go:282] 0 containers: []
	W1213 19:11:06.696773   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:06.696792   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:06.696879   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:06.724330   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:06.724355   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:06.724360   92925 cri.go:89] found id: ""
	I1213 19:11:06.724368   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:06.724422   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:06.728040   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:06.731506   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:06.731610   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:06.756515   92925 cri.go:89] found id: ""
	I1213 19:11:06.756578   92925 logs.go:282] 0 containers: []
	W1213 19:11:06.756601   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:06.756622   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:06.756700   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:06.783035   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:06.783094   92925 cri.go:89] found id: ""
	I1213 19:11:06.783117   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:11:06.783184   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:06.787082   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:06.787158   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:06.813991   92925 cri.go:89] found id: ""
	I1213 19:11:06.814014   92925 logs.go:282] 0 containers: []
	W1213 19:11:06.814022   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:06.814031   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:06.814043   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:06.860023   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:06.860057   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:06.915266   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:06.915303   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:07.005436   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:07.005480   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:07.041558   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:07.041591   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:07.055111   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:07.055140   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:07.085506   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:07.085534   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:07.140042   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:07.140080   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:07.170267   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:11:07.170300   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:07.197645   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:07.197676   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:07.298125   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:07.298167   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:07.368495   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:07.358879    2811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:07.359581    2811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:07.361161    2811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:07.361458    2811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:07.363677    2811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:07.358879    2811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:07.359581    2811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:07.361161    2811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:07.361458    2811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:07.363677    2811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:09.868760   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:09.879760   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:09.879831   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:09.907241   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:09.907264   92925 cri.go:89] found id: ""
	I1213 19:11:09.907272   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:09.907331   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:09.910883   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:09.910954   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:09.936137   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:09.936156   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:09.936161   92925 cri.go:89] found id: ""
	I1213 19:11:09.936167   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:09.936222   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:09.940048   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:09.951154   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:09.951222   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:09.985435   92925 cri.go:89] found id: ""
	I1213 19:11:09.985520   92925 logs.go:282] 0 containers: []
	W1213 19:11:09.985532   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:09.985540   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:09.985648   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:10.028412   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:10.028487   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:10.028521   92925 cri.go:89] found id: ""
	I1213 19:11:10.028549   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:10.028643   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:10.035436   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:10.040716   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:10.040834   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:10.070216   92925 cri.go:89] found id: ""
	I1213 19:11:10.070245   92925 logs.go:282] 0 containers: []
	W1213 19:11:10.070255   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:10.070261   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:10.070323   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:10.107151   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:10.107174   92925 cri.go:89] found id: ""
	I1213 19:11:10.107183   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:11:10.107241   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:10.111700   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:10.111773   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:10.148889   92925 cri.go:89] found id: ""
	I1213 19:11:10.148913   92925 logs.go:282] 0 containers: []
	W1213 19:11:10.148922   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:10.148931   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:10.148946   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:10.183850   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:10.183953   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:10.284535   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:10.284572   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:10.361456   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:10.353378    2893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:10.354229    2893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:10.355719    2893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:10.356209    2893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:10.357653    2893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:10.353378    2893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:10.354229    2893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:10.355719    2893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:10.356209    2893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:10.357653    2893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:10.361521   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:10.361543   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:10.401195   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:10.401230   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:10.466771   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:11:10.466806   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:10.492988   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:10.493041   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:10.506114   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:10.506143   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:10.534614   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:10.534643   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:10.589313   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:10.589346   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:10.621617   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:10.621646   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:13.202940   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:13.214007   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:13.214076   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:13.241311   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:13.241334   92925 cri.go:89] found id: ""
	I1213 19:11:13.241342   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:13.241399   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:13.244857   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:13.244973   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:13.271246   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:13.271272   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:13.271277   92925 cri.go:89] found id: ""
	I1213 19:11:13.271284   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:13.271368   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:13.275204   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:13.278868   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:13.278941   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:13.306334   92925 cri.go:89] found id: ""
	I1213 19:11:13.306365   92925 logs.go:282] 0 containers: []
	W1213 19:11:13.306373   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:13.306379   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:13.306440   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:13.332388   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:13.332407   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:13.332412   92925 cri.go:89] found id: ""
	I1213 19:11:13.332419   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:13.332474   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:13.336618   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:13.340235   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:13.340305   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:13.366487   92925 cri.go:89] found id: ""
	I1213 19:11:13.366522   92925 logs.go:282] 0 containers: []
	W1213 19:11:13.366531   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:13.366537   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:13.366597   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:13.397475   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:13.397496   92925 cri.go:89] found id: ""
	I1213 19:11:13.397504   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:11:13.397565   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:13.401266   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:13.401377   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:13.430168   92925 cri.go:89] found id: ""
	I1213 19:11:13.430196   92925 logs.go:282] 0 containers: []
	W1213 19:11:13.430205   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:13.430221   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:13.430235   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:13.496086   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:13.486609    3016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:13.487472    3016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:13.489304    3016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:13.489961    3016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:13.491916    3016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:13.486609    3016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:13.487472    3016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:13.489304    3016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:13.489961    3016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:13.491916    3016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:13.496111   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:13.496124   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:13.548378   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:13.548413   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:13.601861   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:13.601899   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:13.634165   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:11:13.634193   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:13.662242   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:13.662270   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:13.737810   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:13.737846   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:13.770540   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:13.770574   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:13.783830   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:13.783907   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:13.810122   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:13.810149   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:13.856452   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:13.856485   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:16.448594   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:16.459829   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:16.459900   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:16.489717   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:16.489737   92925 cri.go:89] found id: ""
	I1213 19:11:16.489745   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:16.489799   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:16.494205   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:16.494290   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:16.529314   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:16.529336   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:16.529340   92925 cri.go:89] found id: ""
	I1213 19:11:16.529349   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:16.529404   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:16.533136   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:16.536814   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:16.536887   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:16.563026   92925 cri.go:89] found id: ""
	I1213 19:11:16.563064   92925 logs.go:282] 0 containers: []
	W1213 19:11:16.563073   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:16.563079   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:16.563139   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:16.594519   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:16.594541   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:16.594546   92925 cri.go:89] found id: ""
	I1213 19:11:16.594554   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:16.594611   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:16.598288   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:16.601875   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:16.601946   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:16.628577   92925 cri.go:89] found id: ""
	I1213 19:11:16.628603   92925 logs.go:282] 0 containers: []
	W1213 19:11:16.628612   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:16.628618   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:16.628676   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:16.656978   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:16.657001   92925 cri.go:89] found id: ""
	I1213 19:11:16.657039   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:11:16.657095   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:16.661124   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:16.661236   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:16.695697   92925 cri.go:89] found id: ""
	I1213 19:11:16.695731   92925 logs.go:282] 0 containers: []
	W1213 19:11:16.695739   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:16.695748   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:16.695760   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:16.766672   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:16.757776    3152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:16.758599    3152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:16.760229    3152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:16.760563    3152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:16.762386    3152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:16.757776    3152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:16.758599    3152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:16.760229    3152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:16.760563    3152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:16.762386    3152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:16.766696   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:16.766709   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:16.808187   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:16.808237   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:16.850027   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:16.850062   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:16.906135   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:16.906174   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:16.935630   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:11:16.935661   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:16.963433   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:16.963463   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:17.045818   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:17.045852   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:17.079053   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:17.079080   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:17.186217   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:17.186251   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:17.198725   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:17.198760   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:19.727394   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:19.738364   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:19.738431   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:19.768160   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:19.768183   92925 cri.go:89] found id: ""
	I1213 19:11:19.768196   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:19.768252   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:19.772004   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:19.772128   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:19.799342   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:19.799368   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:19.799374   92925 cri.go:89] found id: ""
	I1213 19:11:19.799382   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:19.799466   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:19.803455   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:19.807247   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:19.807340   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:19.835979   92925 cri.go:89] found id: ""
	I1213 19:11:19.836005   92925 logs.go:282] 0 containers: []
	W1213 19:11:19.836014   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:19.836021   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:19.836081   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:19.864302   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:19.864325   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:19.864331   92925 cri.go:89] found id: ""
	I1213 19:11:19.864338   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:19.864397   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:19.868104   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:19.871725   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:19.871812   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:19.899890   92925 cri.go:89] found id: ""
	I1213 19:11:19.899919   92925 logs.go:282] 0 containers: []
	W1213 19:11:19.899937   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:19.899944   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:19.900012   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:19.927600   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:19.927624   92925 cri.go:89] found id: ""
	I1213 19:11:19.927632   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:11:19.927685   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:19.931424   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:19.931509   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:19.961424   92925 cri.go:89] found id: ""
	I1213 19:11:19.961454   92925 logs.go:282] 0 containers: []
	W1213 19:11:19.961469   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:19.961479   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:19.961492   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:20.002155   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:20.002284   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:20.082123   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:20.071968    3295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:20.072791    3295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:20.075159    3295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:20.076013    3295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:20.077851    3295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:20.071968    3295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:20.072791    3295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:20.075159    3295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:20.076013    3295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:20.077851    3295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:20.082148   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:20.082162   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:20.127578   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:20.127614   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:20.174673   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:11:20.174713   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:20.204713   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:20.204791   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:20.282989   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:20.283026   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:20.327361   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:20.327436   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:20.427993   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:20.428032   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:20.442295   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:20.442326   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:20.471477   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:20.471510   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:23.025659   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:23.036724   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:23.036796   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:23.064245   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:23.064269   92925 cri.go:89] found id: ""
	I1213 19:11:23.064281   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:23.064341   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:23.068194   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:23.068269   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:23.097592   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:23.097616   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:23.097622   92925 cri.go:89] found id: ""
	I1213 19:11:23.097629   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:23.097692   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:23.104525   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:23.110378   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:23.110459   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:23.144932   92925 cri.go:89] found id: ""
	I1213 19:11:23.144958   92925 logs.go:282] 0 containers: []
	W1213 19:11:23.144966   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:23.144972   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:23.145063   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:23.177104   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:23.177129   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:23.177134   92925 cri.go:89] found id: ""
	I1213 19:11:23.177142   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:23.177197   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:23.181178   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:23.185904   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:23.185988   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:23.213662   92925 cri.go:89] found id: ""
	I1213 19:11:23.213740   92925 logs.go:282] 0 containers: []
	W1213 19:11:23.213765   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:23.213784   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:23.213891   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:23.244233   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:23.244298   92925 cri.go:89] found id: ""
	I1213 19:11:23.244322   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:11:23.244413   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:23.248148   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:23.248228   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:23.276740   92925 cri.go:89] found id: ""
	I1213 19:11:23.276765   92925 logs.go:282] 0 containers: []
	W1213 19:11:23.276773   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:23.276784   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:23.276796   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:23.336420   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:11:23.336453   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:23.368543   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:23.368572   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:23.450730   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:23.450772   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:23.483510   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:23.483550   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:23.628675   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:23.619033    3457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:23.620672    3457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:23.621438    3457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:23.623126    3457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:23.623775    3457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:23.619033    3457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:23.620672    3457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:23.621438    3457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:23.623126    3457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:23.623775    3457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:23.628699   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:23.628713   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:23.665846   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:23.665882   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:23.713922   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:23.713959   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:23.752354   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:23.752384   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:23.858109   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:23.858150   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:23.871373   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:23.871404   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:26.419535   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:26.430634   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:26.430705   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:26.458628   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:26.458650   92925 cri.go:89] found id: ""
	I1213 19:11:26.458661   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:26.458716   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:26.462422   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:26.462495   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:26.490349   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:26.490389   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:26.490394   92925 cri.go:89] found id: ""
	I1213 19:11:26.490401   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:26.490468   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:26.494405   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:26.498636   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:26.498716   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:26.528607   92925 cri.go:89] found id: ""
	I1213 19:11:26.528637   92925 logs.go:282] 0 containers: []
	W1213 19:11:26.528646   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:26.528653   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:26.528722   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:26.558710   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:26.558733   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:26.558741   92925 cri.go:89] found id: ""
	I1213 19:11:26.558748   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:26.558825   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:26.562803   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:26.566707   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:26.566808   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:26.596729   92925 cri.go:89] found id: ""
	I1213 19:11:26.596754   92925 logs.go:282] 0 containers: []
	W1213 19:11:26.596763   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:26.596769   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:26.596826   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:26.624054   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:26.624077   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:26.624083   92925 cri.go:89] found id: ""
	I1213 19:11:26.624090   92925 logs.go:282] 2 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:11:26.624167   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:26.628449   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:26.632716   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:26.632822   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:26.659170   92925 cri.go:89] found id: ""
	I1213 19:11:26.659195   92925 logs.go:282] 0 containers: []
	W1213 19:11:26.659204   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:26.659213   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:26.659226   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:26.694272   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:11:26.694300   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:26.720924   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:11:26.720959   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:26.751980   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:26.752009   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:26.824509   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:26.824547   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:26.855705   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:26.855733   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:26.867403   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:26.867431   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:26.906787   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:26.906823   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:26.951319   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:26.951351   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:27.006541   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:27.006579   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:27.033554   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:27.033583   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:27.135230   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:27.135266   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:27.210106   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:27.201700    3649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:27.202413    3649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:27.203893    3649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:27.204311    3649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:27.205969    3649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:27.201700    3649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:27.202413    3649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:27.203893    3649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:27.204311    3649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:27.205969    3649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:29.711829   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:29.723531   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:29.723601   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:29.753961   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:29.753984   92925 cri.go:89] found id: ""
	I1213 19:11:29.753992   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:29.754050   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:29.757806   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:29.757873   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:29.783149   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:29.783181   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:29.783186   92925 cri.go:89] found id: ""
	I1213 19:11:29.783194   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:29.783263   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:29.787082   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:29.790979   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:29.791109   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:29.817959   92925 cri.go:89] found id: ""
	I1213 19:11:29.817985   92925 logs.go:282] 0 containers: []
	W1213 19:11:29.817994   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:29.818000   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:29.818060   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:29.846235   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:29.846257   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:29.846262   92925 cri.go:89] found id: ""
	I1213 19:11:29.846270   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:29.846351   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:29.849953   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:29.853572   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:29.853692   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:29.879800   92925 cri.go:89] found id: ""
	I1213 19:11:29.879834   92925 logs.go:282] 0 containers: []
	W1213 19:11:29.879843   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:29.879850   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:29.879915   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:29.907082   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:29.907116   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:29.907121   92925 cri.go:89] found id: ""
	I1213 19:11:29.907128   92925 logs.go:282] 2 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:11:29.907192   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:29.910914   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:29.914566   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:29.914651   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:29.939124   92925 cri.go:89] found id: ""
	I1213 19:11:29.939149   92925 logs.go:282] 0 containers: []
	W1213 19:11:29.939158   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:29.939168   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:29.939205   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:29.981605   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:29.981639   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:30.089079   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:30.089116   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:30.156090   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:30.156124   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:30.186549   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:11:30.186580   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:30.214921   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:11:30.214950   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:30.242668   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:30.242697   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:30.319413   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:30.319445   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:30.419178   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:30.419215   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:30.431724   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:30.431753   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:30.501053   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:30.492849    3776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:30.493577    3776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:30.495362    3776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:30.495976    3776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:30.497562    3776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:30.492849    3776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:30.493577    3776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:30.495362    3776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:30.495976    3776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:30.497562    3776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:30.501078   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:30.501092   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:30.532550   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:30.532577   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:33.076374   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:33.087831   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:33.087899   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:33.126218   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:33.126241   92925 cri.go:89] found id: ""
	I1213 19:11:33.126251   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:33.126315   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:33.130647   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:33.130731   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:33.158982   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:33.159013   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:33.159020   92925 cri.go:89] found id: ""
	I1213 19:11:33.159028   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:33.159094   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:33.162984   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:33.166562   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:33.166635   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:33.193330   92925 cri.go:89] found id: ""
	I1213 19:11:33.193353   92925 logs.go:282] 0 containers: []
	W1213 19:11:33.193361   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:33.193367   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:33.193423   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:33.221129   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:33.221153   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:33.221159   92925 cri.go:89] found id: ""
	I1213 19:11:33.221166   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:33.221239   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:33.225797   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:33.229503   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:33.229615   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:33.257761   92925 cri.go:89] found id: ""
	I1213 19:11:33.257786   92925 logs.go:282] 0 containers: []
	W1213 19:11:33.257795   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:33.257802   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:33.257865   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:33.285915   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:33.285941   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:33.285957   92925 cri.go:89] found id: ""
	I1213 19:11:33.285968   92925 logs.go:282] 2 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:11:33.286026   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:33.289819   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:33.293581   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:33.293655   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:33.324324   92925 cri.go:89] found id: ""
	I1213 19:11:33.324348   92925 logs.go:282] 0 containers: []
	W1213 19:11:33.324357   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:33.324366   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:33.324377   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:33.350842   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:33.350913   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:33.424344   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:33.424380   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:33.452897   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:33.452930   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:33.504468   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:33.504506   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:33.579150   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:11:33.579183   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:33.607049   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:11:33.607076   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:33.633297   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:33.633326   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:33.668670   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:33.668699   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:33.766904   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:33.766936   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:33.780538   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:33.780567   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:33.857253   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:33.848822    3937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:33.849778    3937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:33.851312    3937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:33.851759    3937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:33.853392    3937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:33.848822    3937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:33.849778    3937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:33.851312    3937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:33.851759    3937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:33.853392    3937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:33.857275   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:33.857290   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:36.398970   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:36.410341   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:36.410416   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:36.438456   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:36.438479   92925 cri.go:89] found id: ""
	I1213 19:11:36.438488   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:36.438568   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:36.442320   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:36.442395   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:36.470092   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:36.470116   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:36.470121   92925 cri.go:89] found id: ""
	I1213 19:11:36.470131   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:36.470218   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:36.474021   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:36.477467   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:36.477578   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:36.505647   92925 cri.go:89] found id: ""
	I1213 19:11:36.505670   92925 logs.go:282] 0 containers: []
	W1213 19:11:36.505714   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:36.505733   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:36.505804   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:36.537872   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:36.537895   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:36.537900   92925 cri.go:89] found id: ""
	I1213 19:11:36.537907   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:36.537961   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:36.541660   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:36.545244   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:36.545314   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:36.570195   92925 cri.go:89] found id: ""
	I1213 19:11:36.570228   92925 logs.go:282] 0 containers: []
	W1213 19:11:36.570238   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:36.570250   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:36.570339   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:36.595894   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:36.595958   92925 cri.go:89] found id: ""
	I1213 19:11:36.595979   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:11:36.596064   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:36.599675   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:36.599789   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:36.624988   92925 cri.go:89] found id: ""
	I1213 19:11:36.625083   92925 logs.go:282] 0 containers: []
	W1213 19:11:36.625101   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:36.625112   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:36.625123   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:36.718891   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:36.718924   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:36.786494   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:36.778476    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:36.779141    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:36.780744    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:36.781242    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:36.782695    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:36.778476    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:36.779141    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:36.780744    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:36.781242    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:36.782695    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:36.786519   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:36.786531   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:36.828295   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:36.828328   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:36.871560   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:36.871591   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:36.941295   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:36.941335   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:37.023869   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:37.023902   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:37.055672   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:37.055700   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:37.069301   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:37.069334   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:37.098989   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:37.099015   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:37.135738   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:11:37.135771   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:39.664114   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:39.675928   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:39.675999   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:39.702971   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:39.702989   92925 cri.go:89] found id: ""
	I1213 19:11:39.702998   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:39.703053   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:39.707021   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:39.707096   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:39.733615   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:39.733637   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:39.733642   92925 cri.go:89] found id: ""
	I1213 19:11:39.733663   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:39.733720   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:39.737520   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:39.740992   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:39.741107   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:39.769090   92925 cri.go:89] found id: ""
	I1213 19:11:39.769174   92925 logs.go:282] 0 containers: []
	W1213 19:11:39.769194   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:39.769201   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:39.769351   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:39.804293   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:39.804314   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:39.804319   92925 cri.go:89] found id: ""
	I1213 19:11:39.804326   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:39.804389   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:39.808495   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:39.812181   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:39.812255   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:39.838217   92925 cri.go:89] found id: ""
	I1213 19:11:39.838243   92925 logs.go:282] 0 containers: []
	W1213 19:11:39.838252   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:39.838259   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:39.838314   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:39.866484   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:39.866504   92925 cri.go:89] found id: ""
	I1213 19:11:39.866512   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:11:39.866567   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:39.870814   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:39.870885   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:39.908207   92925 cri.go:89] found id: ""
	I1213 19:11:39.908233   92925 logs.go:282] 0 containers: []
	W1213 19:11:39.908243   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:39.908252   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:39.908264   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:39.920472   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:39.920499   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:39.948910   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:39.948951   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:40.012782   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:11:40.012825   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:40.047267   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:40.047297   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:40.129790   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:40.129871   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:40.168487   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:40.168519   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:40.269381   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:40.269456   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:40.338885   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:40.330165    4198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:40.330955    4198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:40.333137    4198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:40.333832    4198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:40.335154    4198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:40.330165    4198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:40.330955    4198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:40.333137    4198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:40.333832    4198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:40.335154    4198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:40.338906   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:40.338919   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:40.394986   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:40.395024   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:40.460751   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:40.460799   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:42.992519   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:43.004031   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:43.004110   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:43.032556   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:43.032578   92925 cri.go:89] found id: ""
	I1213 19:11:43.032586   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:43.032640   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:43.036332   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:43.036401   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:43.065252   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:43.065282   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:43.065288   92925 cri.go:89] found id: ""
	I1213 19:11:43.065296   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:43.065358   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:43.070007   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:43.074047   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:43.074122   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:43.108141   92925 cri.go:89] found id: ""
	I1213 19:11:43.108169   92925 logs.go:282] 0 containers: []
	W1213 19:11:43.108181   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:43.108188   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:43.108248   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:43.139539   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:43.139560   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:43.139566   92925 cri.go:89] found id: ""
	I1213 19:11:43.139574   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:43.139629   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:43.143534   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:43.147218   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:43.147292   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:43.175751   92925 cri.go:89] found id: ""
	I1213 19:11:43.175825   92925 logs.go:282] 0 containers: []
	W1213 19:11:43.175849   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:43.175868   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:43.175952   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:43.200994   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:43.201062   92925 cri.go:89] found id: ""
	I1213 19:11:43.201072   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:11:43.201127   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:43.204988   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:43.205128   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:43.231895   92925 cri.go:89] found id: ""
	I1213 19:11:43.231922   92925 logs.go:282] 0 containers: []
	W1213 19:11:43.231946   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:43.231955   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:43.231968   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:43.272192   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:43.272228   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:43.334615   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:11:43.334650   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:43.366125   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:43.366153   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:43.397225   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:43.397254   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:43.468828   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:43.460439    4331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:43.461076    4331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:43.462731    4331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:43.463290    4331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:43.464964    4331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:43.460439    4331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:43.461076    4331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:43.462731    4331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:43.463290    4331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:43.464964    4331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
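The "connection refused" lines above mean nothing is accepting connections on localhost:8443 on the node, so every kubectl call the log collector issues fails before it ever reaches the API; the apiserver container may exist but it is not serving. A minimal way to confirm this by hand from the node, reusing the commands already shown in this log (the curl call is an assumption about what the node image ships; any HTTP response, even 401/403, would prove a listener is up, whereas "connection refused" means there is none):

    # Sketch only: run on the node the harness is probing.
    sudo crictl ps -a --name=kube-apiserver      # is the container present, and is its STATE Running or Exited?
    curl -k https://localhost:8443/livez         # connection refused = no listener; any HTTP reply = apiserver is serving
    sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get --raw=/readyz

If the container is Running but the port stays closed, the apiserver's own log (collected below via "crictl logs") is the next thing to read.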
	I1213 19:11:43.468856   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:43.468869   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:43.519337   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:43.519376   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:43.552934   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:43.552963   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:43.636492   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:43.636526   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:43.735496   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:43.735529   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:43.748666   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:43.748693   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:46.276009   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:46.287459   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:46.287539   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:46.315787   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:46.315809   92925 cri.go:89] found id: ""
	I1213 19:11:46.315817   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:46.315881   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:46.319776   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:46.319870   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:46.349638   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:46.349701   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:46.349721   92925 cri.go:89] found id: ""
	I1213 19:11:46.349737   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:46.349810   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:46.353770   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:46.357319   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:46.357391   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:46.387852   92925 cri.go:89] found id: ""
	I1213 19:11:46.387879   92925 logs.go:282] 0 containers: []
	W1213 19:11:46.387888   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:46.387895   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:46.387956   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:46.415327   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:46.415351   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:46.415362   92925 cri.go:89] found id: ""
	I1213 19:11:46.415369   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:46.415425   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:46.420351   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:46.423877   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:46.423945   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:46.452445   92925 cri.go:89] found id: ""
	I1213 19:11:46.452471   92925 logs.go:282] 0 containers: []
	W1213 19:11:46.452480   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:46.452487   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:46.452543   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:46.488306   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:46.488329   92925 cri.go:89] found id: ""
	I1213 19:11:46.488337   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:11:46.488393   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:46.492372   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:46.492477   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:46.531601   92925 cri.go:89] found id: ""
	I1213 19:11:46.531625   92925 logs.go:282] 0 containers: []
	W1213 19:11:46.531635   92925 logs.go:284] No container was found matching "kindnet"
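Each pass produces the same inventory: one kube-apiserver and one kube-controller-manager container, two IDs each for etcd and kube-scheduler, and nothing at all for kube-proxy, coredns, or kindnet. Because "crictl ps -a" also lists exited containers, two IDs for the same component usually means an older, exited instance plus the current one. A quick way to tell them apart, using the same crictl invocations as above (sketch only, assuming shell access to the node):

    sudo crictl ps -a --name=etcd              # the STATE column separates the Running instance from the Exited one
    sudo crictl ps -a --name=kube-scheduler
    sudo crictl ps --name=kube-apiserver       # without -a: only containers that are currently running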
	I1213 19:11:46.531644   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:46.531656   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:46.576619   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:46.576653   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:46.637968   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:46.638005   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:46.666074   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:46.666103   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:46.699911   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:46.699988   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:46.741837   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:11:46.741889   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:46.771703   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:46.771729   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:46.848202   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:46.848240   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:46.949628   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:46.949664   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:46.963040   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:46.963071   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:47.045784   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:47.037108    4491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:47.038507    4491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:47.039621    4491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:47.040561    4491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:47.042097    4491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:47.037108    4491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:47.038507    4491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:47.039621    4491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:47.040561    4491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:47.042097    4491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:47.045805   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:47.045818   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:49.573745   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:49.584944   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:49.585049   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:49.612421   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:49.612440   92925 cri.go:89] found id: ""
	I1213 19:11:49.612448   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:49.612503   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:49.616771   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:49.616842   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:49.644250   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:49.644313   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:49.644342   92925 cri.go:89] found id: ""
	I1213 19:11:49.644365   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:49.644448   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:49.648357   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:49.652087   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:49.652211   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:49.678765   92925 cri.go:89] found id: ""
	I1213 19:11:49.678790   92925 logs.go:282] 0 containers: []
	W1213 19:11:49.678798   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:49.678804   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:49.678882   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:49.707013   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:49.707082   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:49.707102   92925 cri.go:89] found id: ""
	I1213 19:11:49.707128   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:49.707219   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:49.711513   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:49.715226   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:49.715321   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:49.741306   92925 cri.go:89] found id: ""
	I1213 19:11:49.741375   92925 logs.go:282] 0 containers: []
	W1213 19:11:49.741401   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:49.741421   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:49.741505   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:49.768427   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:49.768451   92925 cri.go:89] found id: ""
	I1213 19:11:49.768459   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:11:49.768517   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:49.772356   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:49.772478   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:49.801564   92925 cri.go:89] found id: ""
	I1213 19:11:49.801633   92925 logs.go:282] 0 containers: []
	W1213 19:11:49.801659   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:49.801687   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:49.801725   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:49.827233   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:49.827261   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:49.884809   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:49.884846   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:49.911980   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:11:49.912011   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:49.938143   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:49.938174   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:49.951851   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:49.951880   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:49.992816   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:49.992861   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:50.064112   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:50.064149   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:50.149808   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:50.149847   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:50.182876   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:50.182907   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:50.285831   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:50.285868   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:50.357682   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:50.350098    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:50.350586    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:50.351793    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:50.352420    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:50.354169    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:50.350098    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:50.350586    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:50.351793    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:50.352420    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:50.354169    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:52.858319   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:52.869473   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:52.869548   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:52.897144   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:52.897169   92925 cri.go:89] found id: ""
	I1213 19:11:52.897177   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:52.897234   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:52.900973   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:52.901074   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:52.928815   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:52.928842   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:52.928847   92925 cri.go:89] found id: ""
	I1213 19:11:52.928855   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:52.928912   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:52.932785   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:52.936853   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:52.936928   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:52.963913   92925 cri.go:89] found id: ""
	I1213 19:11:52.963940   92925 logs.go:282] 0 containers: []
	W1213 19:11:52.963949   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:52.963954   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:52.964018   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:52.993621   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:52.993685   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:52.993705   92925 cri.go:89] found id: ""
	I1213 19:11:52.993730   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:52.993820   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:52.997612   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:53.001214   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:53.001293   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:53.032707   92925 cri.go:89] found id: ""
	I1213 19:11:53.032733   92925 logs.go:282] 0 containers: []
	W1213 19:11:53.032742   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:53.032749   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:53.032812   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:53.059757   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:53.059780   92925 cri.go:89] found id: ""
	I1213 19:11:53.059805   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:11:53.059860   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:53.063600   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:53.063673   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:53.091179   92925 cri.go:89] found id: ""
	I1213 19:11:53.091248   92925 logs.go:282] 0 containers: []
	W1213 19:11:53.091286   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:53.091303   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:11:53.091316   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:53.123301   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:53.123391   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:53.196598   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:53.196634   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:53.227689   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:53.227715   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:53.327870   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:53.327905   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:53.343261   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:53.343290   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:53.371058   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:53.371089   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:53.418862   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:53.418896   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:53.475787   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:53.475822   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:53.507061   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:53.507090   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:53.584040   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:53.575651    4763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:53.576367    4763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:53.577874    4763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:53.578518    4763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:53.580190    4763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:53.575651    4763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:53.576367    4763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:53.577874    4763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:53.578518    4763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:53.580190    4763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:53.584063   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:53.584076   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:56.124239   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:56.136746   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:56.136818   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:56.165417   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:56.165442   92925 cri.go:89] found id: ""
	I1213 19:11:56.165451   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:56.165513   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:56.169272   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:56.169348   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:56.198281   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:56.198304   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:56.198309   92925 cri.go:89] found id: ""
	I1213 19:11:56.198316   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:56.198370   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:56.202310   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:56.206597   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:56.206670   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:56.233152   92925 cri.go:89] found id: ""
	I1213 19:11:56.233179   92925 logs.go:282] 0 containers: []
	W1213 19:11:56.233189   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:56.233195   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:56.233259   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:56.263980   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:56.264000   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:56.264005   92925 cri.go:89] found id: ""
	I1213 19:11:56.264013   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:56.264071   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:56.268409   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:56.272169   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:56.272245   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:56.307136   92925 cri.go:89] found id: ""
	I1213 19:11:56.307163   92925 logs.go:282] 0 containers: []
	W1213 19:11:56.307173   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:56.307179   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:56.307237   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:56.335595   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:56.335618   92925 cri.go:89] found id: ""
	I1213 19:11:56.335626   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:11:56.335684   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:56.339317   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:56.339388   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:56.365740   92925 cri.go:89] found id: ""
	I1213 19:11:56.365763   92925 logs.go:282] 0 containers: []
	W1213 19:11:56.365773   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:56.365782   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:56.365795   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:56.392684   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:56.392715   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:56.443884   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:56.443916   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:56.470931   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:11:56.471007   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:56.498493   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:56.498569   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:56.594275   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:56.594325   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:56.697865   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:56.697902   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:56.710803   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:56.710833   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:56.774588   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:56.766250    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:56.767127    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:56.768759    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:56.769116    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:56.770766    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:56.766250    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:56.767127    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:56.768759    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:56.769116    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:56.770766    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:56.774608   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:56.774621   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:56.822318   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:56.822354   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:56.879404   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:56.879440   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:59.418085   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:59.429523   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:59.429599   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:59.459140   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:59.459164   92925 cri.go:89] found id: ""
	I1213 19:11:59.459173   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:59.459250   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:59.463131   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:59.463231   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:59.491515   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:59.491539   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:59.491544   92925 cri.go:89] found id: ""
	I1213 19:11:59.491552   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:59.491650   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:59.495555   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:59.499043   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:59.499118   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:59.542670   92925 cri.go:89] found id: ""
	I1213 19:11:59.542745   92925 logs.go:282] 0 containers: []
	W1213 19:11:59.542771   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:59.542785   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:59.542861   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:59.569926   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:59.569950   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:59.569954   92925 cri.go:89] found id: ""
	I1213 19:11:59.569962   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:59.570030   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:59.574242   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:59.578071   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:59.578177   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:59.610686   92925 cri.go:89] found id: ""
	I1213 19:11:59.610714   92925 logs.go:282] 0 containers: []
	W1213 19:11:59.610723   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:59.610729   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:59.610789   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:59.639587   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:59.639641   92925 cri.go:89] found id: ""
	I1213 19:11:59.639659   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:11:59.639720   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:59.644316   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:59.644404   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:59.672619   92925 cri.go:89] found id: ""
	I1213 19:11:59.672644   92925 logs.go:282] 0 containers: []
	W1213 19:11:59.672653   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:59.672663   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:11:59.672684   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:59.700144   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:59.700172   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:59.777808   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:59.777856   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:59.811078   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:59.811111   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:59.910789   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:59.910827   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:59.987053   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:59.975650    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:59.976469    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:59.977682    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:59.978310    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:59.979849    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:59.975650    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:59.976469    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:59.977682    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:59.978310    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:59.979849    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:00.003642   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:00.003687   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:00.194711   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:00.194803   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:00.357297   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:00.357336   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:00.438487   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:00.438580   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:00.454845   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:00.454880   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:00.564592   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:00.564633   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
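The whole collection pass (container discovery, per-container logs, then kubelet, CRI-O, dmesg and "describe nodes") repeats every few seconds while the harness waits for the apiserver to come back. The same data can be pulled by hand with one short loop built from the exact commands in this log (a sketch, assuming shell access to the node; the component names and the 400-line tail match the harness defaults shown above):

    # Sketch only: reproduce the harness's log collection manually.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      for id in $(sudo crictl ps -a --quiet --name="$name"); do
        echo "== $name $id =="
        sudo /usr/local/bin/crictl logs --tail 400 "$id"
      done
    done
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400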
	I1213 19:12:03.112543   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:03.123663   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:03.123738   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:03.157514   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:03.157538   92925 cri.go:89] found id: ""
	I1213 19:12:03.157546   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:03.157601   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:03.161756   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:03.161829   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:03.187867   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:03.187887   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:03.187892   92925 cri.go:89] found id: ""
	I1213 19:12:03.187900   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:03.187954   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:03.191586   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:03.195089   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:03.195186   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:03.227702   92925 cri.go:89] found id: ""
	I1213 19:12:03.227727   92925 logs.go:282] 0 containers: []
	W1213 19:12:03.227736   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:03.227742   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:03.227802   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:03.254539   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:03.254561   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:03.254566   92925 cri.go:89] found id: ""
	I1213 19:12:03.254574   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:03.254653   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:03.258434   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:03.262232   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:03.262309   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:03.293528   92925 cri.go:89] found id: ""
	I1213 19:12:03.293552   92925 logs.go:282] 0 containers: []
	W1213 19:12:03.293561   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:03.293567   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:03.293627   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:03.324573   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:03.324595   92925 cri.go:89] found id: ""
	I1213 19:12:03.324603   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:03.324655   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:03.328400   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:03.328469   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:03.354317   92925 cri.go:89] found id: ""
	I1213 19:12:03.354342   92925 logs.go:282] 0 containers: []
	W1213 19:12:03.354351   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:03.354362   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:03.354376   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:03.416520   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:03.416559   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:03.443937   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:03.443966   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:03.520631   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:03.520669   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:03.539545   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:03.539575   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:03.609658   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:03.599495    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:03.600262    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:03.602170    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:03.604093    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:03.604836    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:03.599495    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:03.600262    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:03.602170    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:03.604093    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:03.604836    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:03.609679   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:03.609691   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:03.641994   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:03.642021   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:03.683262   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:03.683296   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:03.711455   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:03.711486   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:03.742963   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:03.742994   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:03.842936   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:03.842971   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:06.387950   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:06.398757   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:06.398838   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:06.427281   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:06.427343   92925 cri.go:89] found id: ""
	I1213 19:12:06.427359   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:06.427424   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:06.431296   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:06.431370   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:06.458047   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:06.458069   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:06.458073   92925 cri.go:89] found id: ""
	I1213 19:12:06.458081   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:06.458138   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:06.461822   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:06.466010   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:06.466084   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:06.504515   92925 cri.go:89] found id: ""
	I1213 19:12:06.504542   92925 logs.go:282] 0 containers: []
	W1213 19:12:06.504551   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:06.504560   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:06.504621   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:06.541478   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:06.541501   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:06.541506   92925 cri.go:89] found id: ""
	I1213 19:12:06.541514   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:06.541576   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:06.545645   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:06.549634   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:06.549704   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:06.576630   92925 cri.go:89] found id: ""
	I1213 19:12:06.576698   92925 logs.go:282] 0 containers: []
	W1213 19:12:06.576724   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:06.576744   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:06.576832   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:06.604207   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:06.604229   92925 cri.go:89] found id: ""
	I1213 19:12:06.604237   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:06.604298   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:06.608117   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:06.608232   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:06.634291   92925 cri.go:89] found id: ""
	I1213 19:12:06.634362   92925 logs.go:282] 0 containers: []
	W1213 19:12:06.634379   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:06.634388   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:06.634402   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:06.696997   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:06.697085   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:06.756705   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:06.756741   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:06.836493   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:06.836525   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:06.936663   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:06.936700   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:06.949180   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:06.949212   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:07.020703   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:07.012352    5275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:07.013247    5275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:07.014825    5275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:07.015260    5275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:07.016747    5275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:07.012352    5275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:07.013247    5275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:07.014825    5275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:07.015260    5275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:07.016747    5275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:07.020728   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:07.020741   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:07.052354   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:07.052383   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:07.079834   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:07.079865   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:07.119690   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:07.119720   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:07.146357   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:07.146385   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:09.686883   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:09.697849   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:09.697924   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:09.724282   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:09.724307   92925 cri.go:89] found id: ""
	I1213 19:12:09.724316   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:09.724374   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:09.727853   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:09.727929   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:09.757294   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:09.757315   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:09.757320   92925 cri.go:89] found id: ""
	I1213 19:12:09.757328   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:09.757383   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:09.761291   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:09.764680   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:09.764755   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:09.791939   92925 cri.go:89] found id: ""
	I1213 19:12:09.791964   92925 logs.go:282] 0 containers: []
	W1213 19:12:09.791974   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:09.791979   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:09.792059   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:09.819349   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:09.819415   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:09.819435   92925 cri.go:89] found id: ""
	I1213 19:12:09.819460   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:09.819540   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:09.823580   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:09.827023   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:09.827138   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:09.857888   92925 cri.go:89] found id: ""
	I1213 19:12:09.857966   92925 logs.go:282] 0 containers: []
	W1213 19:12:09.857990   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:09.858001   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:09.858066   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:09.884350   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:09.884373   92925 cri.go:89] found id: ""
	I1213 19:12:09.884381   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:09.884438   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:09.888641   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:09.888720   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:09.915592   92925 cri.go:89] found id: ""
	I1213 19:12:09.915614   92925 logs.go:282] 0 containers: []
	W1213 19:12:09.915623   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:09.915632   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:09.915644   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:09.941582   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:09.941614   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:10.002342   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:10.002377   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:10.031301   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:10.031336   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:10.071296   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:10.071332   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:10.123567   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:10.123605   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:10.157428   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:10.157457   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:10.238347   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:10.238426   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:10.334563   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:10.334598   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:10.347255   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:10.347286   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:10.432160   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:10.423156    5451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:10.423973    5451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:10.425617    5451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:10.426254    5451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:10.428070    5451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:10.423156    5451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:10.423973    5451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:10.425617    5451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:10.426254    5451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:10.428070    5451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:10.432226   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:10.432252   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:12.994728   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:13.005943   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:13.006017   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:13.033581   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:13.033602   92925 cri.go:89] found id: ""
	I1213 19:12:13.033610   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:13.033689   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:13.037439   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:13.037531   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:13.069482   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:13.069506   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:13.069511   92925 cri.go:89] found id: ""
	I1213 19:12:13.069520   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:13.069579   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:13.073384   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:13.077179   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:13.077250   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:13.117434   92925 cri.go:89] found id: ""
	I1213 19:12:13.117508   92925 logs.go:282] 0 containers: []
	W1213 19:12:13.117525   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:13.117532   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:13.117603   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:13.151113   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:13.151191   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:13.151211   92925 cri.go:89] found id: ""
	I1213 19:12:13.151235   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:13.151330   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:13.155305   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:13.159267   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:13.159375   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:13.193156   92925 cri.go:89] found id: ""
	I1213 19:12:13.193183   92925 logs.go:282] 0 containers: []
	W1213 19:12:13.193191   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:13.193197   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:13.193303   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:13.228192   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:13.228272   92925 cri.go:89] found id: ""
	I1213 19:12:13.228304   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:13.228385   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:13.232149   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:13.232270   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:13.265793   92925 cri.go:89] found id: ""
	I1213 19:12:13.265868   92925 logs.go:282] 0 containers: []
	W1213 19:12:13.265892   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:13.265914   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:13.265974   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:13.298247   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:13.298332   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:13.338944   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:13.338977   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:13.398561   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:13.398600   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:13.426862   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:13.426891   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:13.526771   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:13.526807   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:13.539556   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:13.539587   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:13.606738   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:13.598805    5571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:13.599569    5571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:13.600660    5571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:13.601348    5571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:13.602977    5571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:13.598805    5571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:13.599569    5571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:13.600660    5571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:13.601348    5571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:13.602977    5571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:13.606761   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:13.606777   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:13.632299   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:13.632367   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:13.681186   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:13.681224   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:13.715711   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:13.715741   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:16.289974   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:16.301720   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:16.301794   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:16.333180   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:16.333203   92925 cri.go:89] found id: ""
	I1213 19:12:16.333211   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:16.333262   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:16.337163   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:16.337233   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:16.366808   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:16.366829   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:16.366834   92925 cri.go:89] found id: ""
	I1213 19:12:16.366841   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:16.366897   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:16.370643   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:16.374381   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:16.374453   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:16.402639   92925 cri.go:89] found id: ""
	I1213 19:12:16.402663   92925 logs.go:282] 0 containers: []
	W1213 19:12:16.402672   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:16.402678   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:16.402735   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:16.429862   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:16.429927   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:16.429948   92925 cri.go:89] found id: ""
	I1213 19:12:16.429971   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:16.430057   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:16.437586   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:16.443620   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:16.443739   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:16.468889   92925 cri.go:89] found id: ""
	I1213 19:12:16.468915   92925 logs.go:282] 0 containers: []
	W1213 19:12:16.468933   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:16.468940   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:16.469002   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:16.497884   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:16.497952   92925 cri.go:89] found id: ""
	I1213 19:12:16.497975   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:16.498065   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:16.501907   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:16.502017   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:16.528833   92925 cri.go:89] found id: ""
	I1213 19:12:16.528861   92925 logs.go:282] 0 containers: []
	W1213 19:12:16.528871   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:16.528880   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:16.528891   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:16.571970   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:16.572003   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:16.599399   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:16.599433   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:16.626668   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:16.626698   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:16.657476   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:16.657505   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:16.756171   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:16.756207   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:16.768558   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:16.768587   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:16.841002   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:16.841041   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:16.913877   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:16.913951   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:17.002296   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:16.981549    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:16.983800    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:16.984559    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:16.987461    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:16.988234    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:16.981549    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:16.983800    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:16.984559    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:16.987461    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:16.988234    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:17.002364   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:17.002385   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:17.029940   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:17.029968   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:19.576739   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:19.587975   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:19.588041   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:19.614817   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:19.614840   92925 cri.go:89] found id: ""
	I1213 19:12:19.614848   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:19.614903   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:19.618582   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:19.618679   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:19.651398   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:19.651419   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:19.651424   92925 cri.go:89] found id: ""
	I1213 19:12:19.651432   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:19.651501   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:19.655392   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:19.659059   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:19.659134   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:19.684221   92925 cri.go:89] found id: ""
	I1213 19:12:19.684247   92925 logs.go:282] 0 containers: []
	W1213 19:12:19.684257   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:19.684264   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:19.684323   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:19.711198   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:19.711220   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:19.711226   92925 cri.go:89] found id: ""
	I1213 19:12:19.711233   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:19.711289   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:19.715680   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:19.719221   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:19.719292   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:19.751237   92925 cri.go:89] found id: ""
	I1213 19:12:19.751286   92925 logs.go:282] 0 containers: []
	W1213 19:12:19.751296   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:19.751303   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:19.751371   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:19.778300   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:19.778321   92925 cri.go:89] found id: ""
	I1213 19:12:19.778330   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:19.778413   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:19.782520   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:19.782614   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:19.814477   92925 cri.go:89] found id: ""
	I1213 19:12:19.814507   92925 logs.go:282] 0 containers: []
	W1213 19:12:19.814517   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:19.814526   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:19.814558   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:19.855891   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:19.855922   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:19.917648   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:19.917687   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:19.949548   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:19.949574   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:19.976644   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:19.976680   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:20.064988   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:20.065042   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:20.114742   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:20.114776   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:20.220028   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:20.220066   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:20.232673   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:20.232703   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:20.314099   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:20.305597    5860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:20.306343    5860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:20.308133    5860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:20.308739    5860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:20.310382    5860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:20.305597    5860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:20.306343    5860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:20.308133    5860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:20.308739    5860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:20.310382    5860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:20.314125   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:20.314142   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:20.358618   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:20.358649   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:22.884692   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:22.896642   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:22.896714   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:22.925894   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:22.925919   92925 cri.go:89] found id: ""
	I1213 19:12:22.925928   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:22.925982   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:22.929556   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:22.929630   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:22.957310   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:22.957375   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:22.957393   92925 cri.go:89] found id: ""
	I1213 19:12:22.957419   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:22.957496   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:22.961230   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:22.964927   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:22.965122   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:22.993901   92925 cri.go:89] found id: ""
	I1213 19:12:22.993974   92925 logs.go:282] 0 containers: []
	W1213 19:12:22.994000   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:22.994012   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:22.994092   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:23.021087   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:23.021112   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:23.021117   92925 cri.go:89] found id: ""
	I1213 19:12:23.021123   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:23.021179   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:23.025414   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:23.029044   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:23.029147   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:23.054815   92925 cri.go:89] found id: ""
	I1213 19:12:23.054840   92925 logs.go:282] 0 containers: []
	W1213 19:12:23.054848   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:23.054855   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:23.054913   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:23.080286   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:23.080312   92925 cri.go:89] found id: ""
	I1213 19:12:23.080320   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:23.080407   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:23.084274   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:23.084375   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:23.115727   92925 cri.go:89] found id: ""
	I1213 19:12:23.115750   92925 logs.go:282] 0 containers: []
	W1213 19:12:23.115758   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:23.115767   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:23.115796   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:23.194830   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:23.186405    5944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:23.187281    5944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:23.188756    5944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:23.189379    5944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:23.191250    5944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:23.186405    5944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:23.187281    5944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:23.188756    5944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:23.189379    5944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:23.191250    5944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:23.194890   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:23.194911   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:23.234766   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:23.234801   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:23.282930   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:23.282966   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:23.352028   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:23.352067   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:23.379340   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:23.379418   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:23.425558   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:23.425589   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:23.453170   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:23.453198   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:23.484993   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:23.485089   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:23.575060   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:23.575093   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:23.676623   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:23.676658   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:26.191200   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:26.202087   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:26.202208   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:26.237575   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:26.237607   92925 cri.go:89] found id: ""
	I1213 19:12:26.237616   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:26.237685   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:26.242604   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:26.242726   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:26.275657   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:26.275680   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:26.275687   92925 cri.go:89] found id: ""
	I1213 19:12:26.275696   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:26.275774   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:26.279747   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:26.283677   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:26.283784   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:26.312109   92925 cri.go:89] found id: ""
	I1213 19:12:26.312185   92925 logs.go:282] 0 containers: []
	W1213 19:12:26.312219   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:26.312239   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:26.312329   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:26.342409   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:26.342432   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:26.342437   92925 cri.go:89] found id: ""
	I1213 19:12:26.342445   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:26.342500   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:26.346485   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:26.350281   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:26.350365   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:26.375751   92925 cri.go:89] found id: ""
	I1213 19:12:26.375775   92925 logs.go:282] 0 containers: []
	W1213 19:12:26.375783   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:26.375790   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:26.375864   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:26.401584   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:26.401607   92925 cri.go:89] found id: ""
	I1213 19:12:26.401614   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:26.401686   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:26.405294   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:26.405373   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:26.433390   92925 cri.go:89] found id: ""
	I1213 19:12:26.433467   92925 logs.go:282] 0 containers: []
	W1213 19:12:26.433491   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:26.433507   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:26.433533   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:26.493265   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:26.493305   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:26.528279   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:26.528307   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:26.612530   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:26.612565   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:26.625201   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:26.625231   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:26.695921   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:26.686948    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:26.687827    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:26.689491    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:26.690111    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:26.691852    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:26.686948    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:26.687827    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:26.689491    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:26.690111    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:26.691852    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:26.695942   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:26.695955   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:26.721367   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:26.721436   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:26.747790   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:26.747818   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:26.778783   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:26.778813   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:26.875307   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:26.875341   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:26.926065   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:26.926104   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:29.471412   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:29.482208   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:29.482279   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:29.518089   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:29.518111   92925 cri.go:89] found id: ""
	I1213 19:12:29.518120   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:29.518179   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:29.522151   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:29.522316   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:29.550522   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:29.550548   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:29.550553   92925 cri.go:89] found id: ""
	I1213 19:12:29.550561   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:29.550614   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:29.554476   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:29.557855   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:29.557927   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:29.585314   92925 cri.go:89] found id: ""
	I1213 19:12:29.585337   92925 logs.go:282] 0 containers: []
	W1213 19:12:29.585346   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:29.585352   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:29.585415   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:29.613061   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:29.613081   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:29.613087   92925 cri.go:89] found id: ""
	I1213 19:12:29.613094   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:29.613149   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:29.617383   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:29.621127   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:29.621198   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:29.648388   92925 cri.go:89] found id: ""
	I1213 19:12:29.648415   92925 logs.go:282] 0 containers: []
	W1213 19:12:29.648425   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:29.648434   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:29.648493   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:29.675800   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:29.675823   92925 cri.go:89] found id: ""
	I1213 19:12:29.675832   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:29.675885   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:29.679891   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:29.679964   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:29.708415   92925 cri.go:89] found id: ""
	I1213 19:12:29.708439   92925 logs.go:282] 0 containers: []
	W1213 19:12:29.708447   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:29.708457   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:29.708469   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:29.747281   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:29.747357   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:29.791340   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:29.791374   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:29.834406   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:29.834436   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:29.861132   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:29.861162   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:29.962754   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:29.962831   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:29.975698   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:29.975725   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:30.136167   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:30.136206   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:30.219391   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:30.219426   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:30.250060   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:30.250090   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:30.324085   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:30.315913    6276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:30.316779    6276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:30.318083    6276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:30.318787    6276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:30.320486    6276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:30.315913    6276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:30.316779    6276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:30.318083    6276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:30.318787    6276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:30.320486    6276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:30.324108   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:30.324122   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:32.849129   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:32.861076   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:32.861146   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:32.890816   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:32.890837   92925 cri.go:89] found id: ""
	I1213 19:12:32.890845   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:32.890899   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:32.894607   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:32.894684   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:32.925830   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:32.925856   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:32.925861   92925 cri.go:89] found id: ""
	I1213 19:12:32.925868   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:32.925921   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:32.929582   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:32.932913   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:32.932983   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:32.959171   92925 cri.go:89] found id: ""
	I1213 19:12:32.959199   92925 logs.go:282] 0 containers: []
	W1213 19:12:32.959208   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:32.959214   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:32.959319   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:32.993282   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:32.993309   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:32.993315   92925 cri.go:89] found id: ""
	I1213 19:12:32.993331   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:32.993393   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:32.997923   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:33.002009   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:33.002111   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:33.029187   92925 cri.go:89] found id: ""
	I1213 19:12:33.029210   92925 logs.go:282] 0 containers: []
	W1213 19:12:33.029219   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:33.029225   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:33.029333   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:33.057252   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:33.057287   92925 cri.go:89] found id: ""
	I1213 19:12:33.057296   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:33.057360   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:33.061234   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:33.061340   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:33.089861   92925 cri.go:89] found id: ""
	I1213 19:12:33.089889   92925 logs.go:282] 0 containers: []
	W1213 19:12:33.089898   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:33.089907   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:33.089919   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:33.108679   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:33.108710   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:33.162722   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:33.162768   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:33.227823   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:33.227861   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:33.260183   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:33.260210   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:33.286847   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:33.286872   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:33.368228   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:33.368263   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:33.475747   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:33.475786   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:33.554192   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:33.546124    6395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:33.546992    6395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:33.548557    6395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:33.549128    6395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:33.550628    6395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:33.546124    6395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:33.546992    6395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:33.548557    6395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:33.549128    6395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:33.550628    6395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:33.554212   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:33.554225   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:33.579823   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:33.579850   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:33.623777   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:33.623815   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:36.157314   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:36.168502   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:36.168576   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:36.196421   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:36.196442   92925 cri.go:89] found id: ""
	I1213 19:12:36.196451   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:36.196511   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:36.200568   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:36.200636   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:36.227300   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:36.227324   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:36.227331   92925 cri.go:89] found id: ""
	I1213 19:12:36.227338   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:36.227396   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:36.231459   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:36.235239   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:36.235316   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:36.268611   92925 cri.go:89] found id: ""
	I1213 19:12:36.268635   92925 logs.go:282] 0 containers: []
	W1213 19:12:36.268644   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:36.268650   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:36.268731   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:36.308479   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:36.308576   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:36.308597   92925 cri.go:89] found id: ""
	I1213 19:12:36.308642   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:36.308738   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:36.312547   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:36.316077   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:36.316189   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:36.342346   92925 cri.go:89] found id: ""
	I1213 19:12:36.342382   92925 logs.go:282] 0 containers: []
	W1213 19:12:36.342392   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:36.342414   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:36.342496   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:36.368808   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:36.368834   92925 cri.go:89] found id: ""
	I1213 19:12:36.368844   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:36.368899   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:36.372705   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:36.372790   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:36.399760   92925 cri.go:89] found id: ""
	I1213 19:12:36.399796   92925 logs.go:282] 0 containers: []
	W1213 19:12:36.399805   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:36.399817   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:36.399829   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:36.497016   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:36.497097   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:36.511432   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:36.511552   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:36.587222   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:36.577960    6497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:36.578711    6497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:36.580805    6497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:36.581572    6497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:36.583427    6497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:36.577960    6497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:36.578711    6497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:36.580805    6497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:36.581572    6497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:36.583427    6497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:36.587247   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:36.587262   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:36.630739   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:36.630774   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:36.683440   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:36.683473   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:36.751190   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:36.751241   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:36.779744   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:36.779833   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:36.806180   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:36.806206   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:36.832449   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:36.832475   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:36.910859   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:36.910900   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:39.441151   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:39.452365   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:39.452439   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:39.484411   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:39.484436   92925 cri.go:89] found id: ""
	I1213 19:12:39.484444   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:39.484499   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:39.488316   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:39.488390   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:39.519236   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:39.519263   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:39.519268   92925 cri.go:89] found id: ""
	I1213 19:12:39.519277   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:39.519331   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:39.523340   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:39.529308   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:39.529377   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:39.559339   92925 cri.go:89] found id: ""
	I1213 19:12:39.559405   92925 logs.go:282] 0 containers: []
	W1213 19:12:39.559437   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:39.559456   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:39.559543   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:39.589737   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:39.589769   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:39.589775   92925 cri.go:89] found id: ""
	I1213 19:12:39.589783   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:39.589848   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:39.593976   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:39.598330   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:39.598421   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:39.631670   92925 cri.go:89] found id: ""
	I1213 19:12:39.631699   92925 logs.go:282] 0 containers: []
	W1213 19:12:39.631708   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:39.631714   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:39.631783   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:39.662738   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:39.662803   92925 cri.go:89] found id: ""
	I1213 19:12:39.662824   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:39.662906   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:39.666773   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:39.666867   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:39.695600   92925 cri.go:89] found id: ""
	I1213 19:12:39.695627   92925 logs.go:282] 0 containers: []
	W1213 19:12:39.695637   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:39.695646   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:39.695658   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:39.787866   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:39.787904   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:39.864556   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:39.853140    6628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:39.856488    6628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:39.857226    6628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:39.858708    6628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:39.859314    6628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:39.853140    6628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:39.856488    6628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:39.857226    6628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:39.858708    6628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:39.859314    6628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:39.864580   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:39.864594   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:39.893552   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:39.893593   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:39.935040   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:39.935070   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:39.977962   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:39.977992   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:40.052674   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:40.052713   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:40.145597   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:40.145709   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:40.181340   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:40.181368   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:40.194929   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:40.194999   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:40.222595   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:40.222665   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
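
Aside: every failed "describe nodes" attempt above ends with "dial tcp [::1]:8443: connect: connection refused", i.e. kubectl cannot reach the apiserver that the test is waiting on. A minimal Go sketch of that reachability check follows; the port and the /healthz path are taken from the log, while the TLS-skipping client and everything else are assumptions made only to keep the example self-contained (this is illustrative, not minikube's own code).

// Illustrative probe for the condition seen in the log: is anything
// accepting connections on localhost:8443, and if so does /healthz answer?
package main

import (
	"crypto/tls"
	"fmt"
	"net"
	"net/http"
	"time"
)

func main() {
	// Raw TCP check first; a refusal here matches the "connection refused" errors above.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()

	// If the port answers, query the health endpoint. Skipping certificate
	// verification is an assumption for this probe only.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://localhost:8443/healthz")
	if err != nil {
		fmt.Println("health check failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("apiserver /healthz status:", resp.Status)
}
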
	I1213 19:12:42.749068   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:42.760019   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:42.760098   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:42.790868   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:42.790891   92925 cri.go:89] found id: ""
	I1213 19:12:42.790898   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:42.790953   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:42.794682   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:42.794770   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:42.823001   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:42.823024   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:42.823029   92925 cri.go:89] found id: ""
	I1213 19:12:42.823036   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:42.823102   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:42.826966   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:42.830581   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:42.830667   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:42.857298   92925 cri.go:89] found id: ""
	I1213 19:12:42.857325   92925 logs.go:282] 0 containers: []
	W1213 19:12:42.857334   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:42.857340   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:42.857402   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:42.888499   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:42.888524   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:42.888528   92925 cri.go:89] found id: ""
	I1213 19:12:42.888535   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:42.888601   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:42.894724   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:42.898823   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:42.898944   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:42.925225   92925 cri.go:89] found id: ""
	I1213 19:12:42.925262   92925 logs.go:282] 0 containers: []
	W1213 19:12:42.925271   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:42.925277   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:42.925363   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:42.954151   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:42.954186   92925 cri.go:89] found id: ""
	I1213 19:12:42.954195   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:42.954262   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:42.958191   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:42.958256   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:42.997632   92925 cri.go:89] found id: ""
	I1213 19:12:42.997699   92925 logs.go:282] 0 containers: []
	W1213 19:12:42.997722   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:42.997738   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:42.997750   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:43.044934   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:43.044968   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:43.130707   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:43.130787   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:43.162064   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:43.162196   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:43.174781   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:43.174807   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:43.248282   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:43.239057    6792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:43.239785    6792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:43.241456    6792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:43.242060    6792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:43.243778    6792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:43.239057    6792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:43.239785    6792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:43.241456    6792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:43.242060    6792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:43.243778    6792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:43.248309   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:43.248322   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:43.292697   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:43.292729   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:43.326878   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:43.326906   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:43.402321   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:43.402356   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:43.434630   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:43.434662   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:43.547901   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:43.547940   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
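
Aside: the "listing CRI containers" / "found id" / "No container was found matching" lines above come from per-component lookups of the form `sudo crictl ps -a --quiet --name=<component>`. A rough Go wrapper for that lookup is sketched below; it mirrors what the log shows but is not minikube's cri.go, and it assumes sudo and crictl are available on the node.

// Illustrative helper: return container IDs (any state) whose name matches,
// the same way the crictl commands in the log do.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	for _, component := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
		ids, err := containerIDs(component)
		if err != nil {
			fmt.Println(component, "lookup failed:", err)
			continue
		}
		// An empty result corresponds to the 'No container was found matching "..."' warnings above.
		fmt.Printf("%s: %d containers %v\n", component, len(ids), ids)
	}
}
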
	I1213 19:12:46.074896   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:46.086088   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:46.086156   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:46.138954   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:46.138977   92925 cri.go:89] found id: ""
	I1213 19:12:46.138985   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:46.139041   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:46.142934   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:46.143008   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:46.167983   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:46.168008   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:46.168014   92925 cri.go:89] found id: ""
	I1213 19:12:46.168022   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:46.168083   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:46.172203   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:46.176085   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:46.176164   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:46.206474   92925 cri.go:89] found id: ""
	I1213 19:12:46.206501   92925 logs.go:282] 0 containers: []
	W1213 19:12:46.206509   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:46.206515   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:46.206572   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:46.232990   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:46.233047   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:46.233052   92925 cri.go:89] found id: ""
	I1213 19:12:46.233059   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:46.233121   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:46.236960   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:46.241098   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:46.241171   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:46.277846   92925 cri.go:89] found id: ""
	I1213 19:12:46.277872   92925 logs.go:282] 0 containers: []
	W1213 19:12:46.277881   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:46.277886   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:46.277945   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:46.306293   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:46.306316   92925 cri.go:89] found id: ""
	I1213 19:12:46.306324   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:46.306383   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:46.310146   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:46.310220   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:46.337703   92925 cri.go:89] found id: ""
	I1213 19:12:46.337728   92925 logs.go:282] 0 containers: []
	W1213 19:12:46.337737   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:46.337746   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:46.337757   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:46.433354   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:46.433391   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:46.446062   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:46.446089   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:46.474866   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:46.474894   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:46.518894   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:46.518972   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:46.584190   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:46.584221   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:46.612728   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:46.612798   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:46.693365   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:46.693401   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:46.730005   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:46.730036   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:46.805821   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:46.797250    6951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:46.797857    6951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:46.799401    6951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:46.799906    6951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:46.801867    6951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:46.797250    6951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:46.797857    6951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:46.799401    6951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:46.799906    6951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:46.801867    6951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:46.805844   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:46.805858   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:46.849142   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:46.849180   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:49.377325   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:49.388007   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:49.388073   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:49.414745   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:49.414768   92925 cri.go:89] found id: ""
	I1213 19:12:49.414777   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:49.414831   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:49.418502   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:49.418579   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:49.443751   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:49.443772   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:49.443777   92925 cri.go:89] found id: ""
	I1213 19:12:49.443784   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:49.443864   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:49.447524   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:49.450957   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:49.451025   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:49.478284   92925 cri.go:89] found id: ""
	I1213 19:12:49.478309   92925 logs.go:282] 0 containers: []
	W1213 19:12:49.478318   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:49.478324   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:49.478383   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:49.506581   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:49.506604   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:49.506609   92925 cri.go:89] found id: ""
	I1213 19:12:49.506617   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:49.506673   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:49.513976   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:49.518489   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:49.518567   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:49.545961   92925 cri.go:89] found id: ""
	I1213 19:12:49.545986   92925 logs.go:282] 0 containers: []
	W1213 19:12:49.545995   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:49.546001   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:49.546072   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:49.579946   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:49.579974   92925 cri.go:89] found id: ""
	I1213 19:12:49.579983   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:49.580036   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:49.583648   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:49.583726   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:49.610201   92925 cri.go:89] found id: ""
	I1213 19:12:49.610278   92925 logs.go:282] 0 containers: []
	W1213 19:12:49.610294   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:49.610304   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:49.610321   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:49.682958   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:49.682995   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:49.716028   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:49.716058   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:49.744220   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:49.744248   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:49.783347   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:49.783379   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:49.826736   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:49.826770   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:49.860737   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:49.860767   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:49.894176   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:49.894206   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:49.978486   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:49.978525   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:50.088530   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:50.088567   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:50.107858   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:50.107886   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:50.186950   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:50.178748    7100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:50.179306    7100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:50.180827    7100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:50.181343    7100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:50.182902    7100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:50.178748    7100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:50.179306    7100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:50.180827    7100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:50.181343    7100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:50.182902    7100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:52.687879   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:52.700111   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:52.700185   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:52.727611   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:52.727635   92925 cri.go:89] found id: ""
	I1213 19:12:52.727643   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:52.727699   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:52.732611   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:52.732683   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:52.760331   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:52.760355   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:52.760361   92925 cri.go:89] found id: ""
	I1213 19:12:52.760369   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:52.760424   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:52.764203   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:52.767807   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:52.767880   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:52.794453   92925 cri.go:89] found id: ""
	I1213 19:12:52.794528   92925 logs.go:282] 0 containers: []
	W1213 19:12:52.794552   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:52.794571   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:52.794662   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:52.824938   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:52.825046   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:52.825077   92925 cri.go:89] found id: ""
	I1213 19:12:52.825108   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:52.825170   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:52.828865   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:52.832644   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:52.832718   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:52.860489   92925 cri.go:89] found id: ""
	I1213 19:12:52.860512   92925 logs.go:282] 0 containers: []
	W1213 19:12:52.860521   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:52.860527   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:52.860588   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:52.886828   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:52.886862   92925 cri.go:89] found id: ""
	I1213 19:12:52.886872   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:52.886940   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:52.890986   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:52.891106   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:52.917681   92925 cri.go:89] found id: ""
	I1213 19:12:52.917749   92925 logs.go:282] 0 containers: []
	W1213 19:12:52.917776   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:52.917799   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:52.917837   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:52.948506   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:52.948535   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:52.977936   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:52.977963   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:53.041212   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:53.041249   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:53.080162   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:53.080189   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:53.174852   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:53.174897   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:53.273766   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:53.273802   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:53.285893   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:53.285925   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:53.352966   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:53.343677    7212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:53.345158    7212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:53.345928    7212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:53.347424    7212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:53.347925    7212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:53.343677    7212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:53.345158    7212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:53.345928    7212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:53.347424    7212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:53.347925    7212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:53.352990   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:53.353032   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:53.391432   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:53.391464   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:53.451329   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:53.451363   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:55.977809   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:55.993375   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:55.993492   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:56.026972   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:56.026993   92925 cri.go:89] found id: ""
	I1213 19:12:56.027001   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:56.027059   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:56.031128   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:56.031204   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:56.058936   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:56.058958   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:56.058963   92925 cri.go:89] found id: ""
	I1213 19:12:56.058971   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:56.059024   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:56.062862   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:56.066757   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:56.066858   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:56.096088   92925 cri.go:89] found id: ""
	I1213 19:12:56.096112   92925 logs.go:282] 0 containers: []
	W1213 19:12:56.096121   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:56.096134   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:56.096196   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:56.138653   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:56.138678   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:56.138683   92925 cri.go:89] found id: ""
	I1213 19:12:56.138691   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:56.138748   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:56.142767   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:56.146336   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:56.146413   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:56.176996   92925 cri.go:89] found id: ""
	I1213 19:12:56.177098   92925 logs.go:282] 0 containers: []
	W1213 19:12:56.177115   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:56.177122   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:56.177191   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:56.206318   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:56.206341   92925 cri.go:89] found id: ""
	I1213 19:12:56.206350   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:56.206405   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:56.210085   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:56.210208   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:56.240242   92925 cri.go:89] found id: ""
	I1213 19:12:56.240269   92925 logs.go:282] 0 containers: []
	W1213 19:12:56.240278   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:56.240287   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:56.240299   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:56.268772   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:56.268800   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:56.282265   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:56.282293   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:56.334697   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:56.334731   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:56.419986   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:56.420074   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:56.466391   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:56.466421   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:56.578289   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:56.578327   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:56.657266   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:56.648227    7343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:56.649364    7343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:56.650885    7343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:56.651401    7343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:56.653076    7343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:56.648227    7343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:56.649364    7343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:56.650885    7343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:56.651401    7343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:56.653076    7343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:56.657289   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:56.657302   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:56.685603   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:56.685631   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:56.732451   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:56.732487   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:56.807034   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:56.807068   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
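
Aside: each retry cycle above re-gathers the same sources with the exact shell commands shown (journalctl for kubelet and CRI-O, a filtered dmesg, and `crictl logs --tail 400` per container). The sketch below compresses that gathering step into a short Go helper for illustration only; CONTAINER_ID is a deliberate placeholder where an ID from the crictl lookup would be substituted.

// Illustrative log-gathering step, modeled on the commands visible in the log.
package main

import (
	"fmt"
	"os/exec"
)

func gather(label, command string) {
	fmt.Println("Gathering logs for", label, "...")
	out, err := exec.Command("/bin/bash", "-c", command).CombinedOutput()
	if err != nil {
		fmt.Printf("  %s failed: %v\n", label, err)
	}
	fmt.Printf("  captured %d bytes\n", len(out))
}

func main() {
	gather("kubelet", `sudo journalctl -u kubelet -n 400`)
	gather("CRI-O", `sudo journalctl -u crio -n 400`)
	gather("dmesg", `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`)
	// Per-container logs would substitute a real ID found via crictl.
	gather("kube-apiserver", `sudo /usr/local/bin/crictl logs --tail 400 CONTAINER_ID`)
}
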
	I1213 19:12:59.335877   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:59.346983   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:59.347053   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:59.375213   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:59.375241   92925 cri.go:89] found id: ""
	I1213 19:12:59.375250   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:59.375308   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:59.379246   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:59.379319   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:59.406052   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:59.406073   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:59.406078   92925 cri.go:89] found id: ""
	I1213 19:12:59.406085   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:59.406142   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:59.409969   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:59.413744   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:59.413813   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:59.440031   92925 cri.go:89] found id: ""
	I1213 19:12:59.440057   92925 logs.go:282] 0 containers: []
	W1213 19:12:59.440066   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:59.440072   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:59.440131   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:59.470750   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:59.470770   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:59.470775   92925 cri.go:89] found id: ""
	I1213 19:12:59.470782   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:59.470836   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:59.474671   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:59.478148   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:59.478230   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:59.532301   92925 cri.go:89] found id: ""
	I1213 19:12:59.532334   92925 logs.go:282] 0 containers: []
	W1213 19:12:59.532344   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:59.532350   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:59.532423   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:59.558719   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:59.558742   92925 cri.go:89] found id: ""
	I1213 19:12:59.558750   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:59.558814   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:59.562460   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:59.562534   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:59.588851   92925 cri.go:89] found id: ""
	I1213 19:12:59.588916   92925 logs.go:282] 0 containers: []
	W1213 19:12:59.588942   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:59.588964   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:59.589031   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:59.665993   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:59.666032   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:59.712805   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:59.712839   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:59.725635   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:59.725688   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:59.797796   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:59.790093    7462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:59.790845    7462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:59.791906    7462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:59.792472    7462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:59.794170    7462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:59.790093    7462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:59.790845    7462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:59.791906    7462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:59.792472    7462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:59.794170    7462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:59.797819   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:59.797831   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:59.825855   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:59.825886   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:59.864251   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:59.864286   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:59.890125   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:59.890151   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:59.981337   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:59.981387   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:00.239751   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:00.239799   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:00.366187   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:00.368005   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:02.909028   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:02.919617   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:02.919732   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:02.946548   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:02.946613   92925 cri.go:89] found id: ""
	I1213 19:13:02.946629   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:02.946696   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:02.950448   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:02.950542   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:02.975550   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:02.975572   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:02.975577   92925 cri.go:89] found id: ""
	I1213 19:13:02.975585   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:02.975645   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:02.979406   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:02.984704   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:02.984818   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:03.017288   92925 cri.go:89] found id: ""
	I1213 19:13:03.017311   92925 logs.go:282] 0 containers: []
	W1213 19:13:03.017320   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:03.017334   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:03.017393   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:03.048824   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:03.048850   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:03.048857   92925 cri.go:89] found id: ""
	I1213 19:13:03.048864   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:03.048919   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:03.052630   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:03.056397   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:03.056521   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:03.088050   92925 cri.go:89] found id: ""
	I1213 19:13:03.088123   92925 logs.go:282] 0 containers: []
	W1213 19:13:03.088146   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:03.088165   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:03.088271   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:03.119709   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:13:03.119778   92925 cri.go:89] found id: ""
	I1213 19:13:03.119801   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:13:03.119889   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:03.127122   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:03.127274   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:03.162913   92925 cri.go:89] found id: ""
	I1213 19:13:03.162936   92925 logs.go:282] 0 containers: []
	W1213 19:13:03.162945   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:03.162953   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:03.162966   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:03.207543   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:03.207579   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:03.279537   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:13:03.279575   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:13:03.314034   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:03.314062   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:03.394532   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:03.394567   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:03.428318   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:03.428351   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:03.528148   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:03.528187   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:03.626750   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:03.618493    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:03.619154    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:03.620764    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:03.621367    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:03.622889    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:03.618493    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:03.619154    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:03.620764    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:03.621367    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:03.622889    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:03.626775   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:03.626788   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:03.685480   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:03.685519   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:03.713856   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:03.713883   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:03.734590   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:03.734620   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:06.266879   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:06.277733   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:06.277799   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:06.305175   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:06.305196   92925 cri.go:89] found id: ""
	I1213 19:13:06.305204   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:06.305258   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:06.308850   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:06.308928   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:06.335153   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:06.335177   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:06.335182   92925 cri.go:89] found id: ""
	I1213 19:13:06.335189   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:06.335246   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:06.338903   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:06.342418   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:06.342493   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:06.372604   92925 cri.go:89] found id: ""
	I1213 19:13:06.372632   92925 logs.go:282] 0 containers: []
	W1213 19:13:06.372641   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:06.372646   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:06.372707   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:06.402642   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:06.402670   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:06.402675   92925 cri.go:89] found id: ""
	I1213 19:13:06.402682   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:06.402740   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:06.406787   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:06.411254   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:06.411335   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:06.437659   92925 cri.go:89] found id: ""
	I1213 19:13:06.437736   92925 logs.go:282] 0 containers: []
	W1213 19:13:06.437751   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:06.437758   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:06.437829   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:06.466702   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:06.466725   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:13:06.466730   92925 cri.go:89] found id: ""
	I1213 19:13:06.466737   92925 logs.go:282] 2 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:13:06.466793   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:06.470567   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:06.474150   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:06.474224   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:06.501494   92925 cri.go:89] found id: ""
	I1213 19:13:06.501569   92925 logs.go:282] 0 containers: []
	W1213 19:13:06.501594   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:06.501617   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:06.501662   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:06.544779   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:06.544813   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:06.609379   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:06.609413   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:06.637668   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:06.637698   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:06.664078   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:06.664105   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:06.709192   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:13:06.709225   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:13:06.737814   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:06.737845   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:06.810267   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:06.810302   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:06.841843   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:06.841871   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:06.938739   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:06.938776   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:06.951386   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:06.951414   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:07.032986   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:07.025075    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:07.025642    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:07.027282    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:07.027955    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:07.029566    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:07.025075    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:07.025642    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:07.027282    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:07.027955    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:07.029566    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:07.033040   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:07.033053   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:09.558493   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:09.570604   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:09.570681   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:09.598108   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:09.598133   92925 cri.go:89] found id: ""
	I1213 19:13:09.598141   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:09.598197   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:09.602596   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:09.602673   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:09.629705   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:09.629727   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:09.629733   92925 cri.go:89] found id: ""
	I1213 19:13:09.629741   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:09.629798   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:09.634280   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:09.637817   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:09.637895   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:09.665414   92925 cri.go:89] found id: ""
	I1213 19:13:09.665438   92925 logs.go:282] 0 containers: []
	W1213 19:13:09.665447   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:09.665453   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:09.665509   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:09.691729   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:09.691754   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:09.691759   92925 cri.go:89] found id: ""
	I1213 19:13:09.691766   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:09.691850   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:09.696064   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:09.700204   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:09.700308   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:09.732154   92925 cri.go:89] found id: ""
	I1213 19:13:09.732181   92925 logs.go:282] 0 containers: []
	W1213 19:13:09.732190   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:09.732196   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:09.732277   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:09.760821   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:09.760844   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:13:09.760849   92925 cri.go:89] found id: ""
	I1213 19:13:09.760856   92925 logs.go:282] 2 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:13:09.760918   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:09.764697   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:09.768225   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:09.768299   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:09.796678   92925 cri.go:89] found id: ""
	I1213 19:13:09.796748   92925 logs.go:282] 0 containers: []
	W1213 19:13:09.796773   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:09.796797   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:09.796844   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:09.892500   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:09.892536   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:09.905527   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:09.905557   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:09.964751   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:09.964785   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:10.026858   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:10.026896   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:10.095709   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:10.095747   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:10.135797   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:10.135834   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:10.207467   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:10.198321    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:10.199090    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:10.200887    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:10.201755    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:10.202624    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:10.198321    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:10.199090    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:10.200887    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:10.201755    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:10.202624    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:10.207502   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:10.207515   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:10.233202   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:10.233298   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:10.259818   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:13:10.259845   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:13:10.286455   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:10.286482   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:10.359430   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:10.359465   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:12.894266   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:12.905675   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:12.905773   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:12.932239   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:12.932259   92925 cri.go:89] found id: ""
	I1213 19:13:12.932267   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:12.932320   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:12.935869   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:12.935938   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:12.961758   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:12.961778   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:12.961782   92925 cri.go:89] found id: ""
	I1213 19:13:12.961789   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:12.961846   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:12.965449   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:12.968967   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:12.969071   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:13.001173   92925 cri.go:89] found id: ""
	I1213 19:13:13.001203   92925 logs.go:282] 0 containers: []
	W1213 19:13:13.001213   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:13.001219   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:13.001333   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:13.029728   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:13.029751   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:13.029756   92925 cri.go:89] found id: ""
	I1213 19:13:13.029764   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:13.029818   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:13.033632   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:13.037474   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:13.037598   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:13.064000   92925 cri.go:89] found id: ""
	I1213 19:13:13.064025   92925 logs.go:282] 0 containers: []
	W1213 19:13:13.064034   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:13.064040   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:13.064151   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:13.092827   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:13.092847   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:13:13.092852   92925 cri.go:89] found id: ""
	I1213 19:13:13.092859   92925 logs.go:282] 2 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:13:13.092913   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:13.097637   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:13.102128   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:13.102195   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:13.132820   92925 cri.go:89] found id: ""
	I1213 19:13:13.132891   92925 logs.go:282] 0 containers: []
	W1213 19:13:13.132912   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:13.132934   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:13.132976   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:13.200851   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:13:13.200889   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:13:13.232573   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:13.232603   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:13.325521   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:13.325556   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:13.338293   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:13.338324   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:13.369921   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:13.369950   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:13.416445   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:13.416477   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:13.443214   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:13.443243   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:13.468415   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:13.468448   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:13.553200   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:13.553248   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:13.596683   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:13.596717   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:13.678127   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:13.669907    8089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:13.670748    8089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:13.672392    8089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:13.672709    8089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:13.674262    8089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:13.669907    8089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:13.670748    8089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:13.672392    8089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:13.672709    8089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:13.674262    8089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:13.678150   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:13.678167   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:16.227377   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:16.238613   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:16.238685   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:16.271628   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:16.271652   92925 cri.go:89] found id: ""
	I1213 19:13:16.271661   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:16.271717   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:16.275571   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:16.275645   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:16.304819   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:16.304843   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:16.304848   92925 cri.go:89] found id: ""
	I1213 19:13:16.304856   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:16.304911   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:16.308802   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:16.312668   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:16.312741   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:16.347113   92925 cri.go:89] found id: ""
	I1213 19:13:16.347137   92925 logs.go:282] 0 containers: []
	W1213 19:13:16.347146   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:16.347153   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:16.347209   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:16.380339   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:16.380362   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:16.380368   92925 cri.go:89] found id: ""
	I1213 19:13:16.380376   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:16.380433   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:16.383986   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:16.387756   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:16.387876   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:16.419309   92925 cri.go:89] found id: ""
	I1213 19:13:16.419344   92925 logs.go:282] 0 containers: []
	W1213 19:13:16.419353   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:16.419359   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:16.419427   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:16.447987   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:16.448019   92925 cri.go:89] found id: ""
	I1213 19:13:16.448028   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:16.448093   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:16.452467   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:16.452551   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:16.478206   92925 cri.go:89] found id: ""
	I1213 19:13:16.478271   92925 logs.go:282] 0 containers: []
	W1213 19:13:16.478298   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:16.478319   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:16.478361   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:16.505859   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:16.505891   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:16.547050   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:16.547085   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:16.591041   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:16.591074   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:16.659418   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:16.659502   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:16.686174   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:16.686202   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:16.763753   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:16.763792   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:16.795967   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:16.795996   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:16.909202   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:16.909246   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:16.921936   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:16.921962   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:16.996415   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:16.987820    8228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:16.988740    8228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:16.990501    8228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:16.990844    8228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:16.992387    8228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:16.987820    8228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:16.988740    8228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:16.990501    8228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:16.990844    8228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:16.992387    8228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:16.996438   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:16.996452   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:19.525182   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:19.536170   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:19.536246   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:19.563344   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:19.563368   92925 cri.go:89] found id: ""
	I1213 19:13:19.563377   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:19.563432   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:19.567191   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:19.567263   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:19.594906   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:19.594926   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:19.594936   92925 cri.go:89] found id: ""
	I1213 19:13:19.594944   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:19.595012   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:19.599420   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:19.603163   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:19.603240   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:19.636656   92925 cri.go:89] found id: ""
	I1213 19:13:19.636681   92925 logs.go:282] 0 containers: []
	W1213 19:13:19.636690   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:19.636696   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:19.636753   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:19.667204   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:19.667274   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:19.667292   92925 cri.go:89] found id: ""
	I1213 19:13:19.667316   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:19.667395   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:19.671184   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:19.674972   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:19.675041   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:19.704947   92925 cri.go:89] found id: ""
	I1213 19:13:19.704971   92925 logs.go:282] 0 containers: []
	W1213 19:13:19.704980   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:19.704988   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:19.705073   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:19.730669   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:19.730691   92925 cri.go:89] found id: ""
	I1213 19:13:19.730699   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:19.730771   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:19.735384   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:19.735477   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:19.760611   92925 cri.go:89] found id: ""
	I1213 19:13:19.760634   92925 logs.go:282] 0 containers: []
	W1213 19:13:19.760643   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:19.760669   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:19.760686   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:19.788592   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:19.788621   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:19.882694   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:19.882730   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:19.954514   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:19.946675    8317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:19.947253    8317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:19.948589    8317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:19.949210    8317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:19.950900    8317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:19.946675    8317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:19.947253    8317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:19.948589    8317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:19.949210    8317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:19.950900    8317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:19.954535   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:19.954550   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:19.980616   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:19.980694   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:20.035895   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:20.035930   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:20.104716   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:20.104768   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:20.199665   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:20.199701   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:20.234652   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:20.234680   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:20.248416   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:20.248444   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:20.296588   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:20.296624   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:22.824017   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:22.838193   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:22.838267   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:22.874481   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:22.874503   92925 cri.go:89] found id: ""
	I1213 19:13:22.874512   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:22.874578   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:22.878378   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:22.878467   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:22.907053   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:22.907075   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:22.907079   92925 cri.go:89] found id: ""
	I1213 19:13:22.907086   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:22.907143   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:22.911144   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:22.914933   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:22.915007   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:22.942646   92925 cri.go:89] found id: ""
	I1213 19:13:22.942714   92925 logs.go:282] 0 containers: []
	W1213 19:13:22.942729   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:22.942736   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:22.942797   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:22.969713   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:22.969735   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:22.969740   92925 cri.go:89] found id: ""
	I1213 19:13:22.969748   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:22.969804   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:22.973708   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:22.977426   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:22.977514   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:23.007912   92925 cri.go:89] found id: ""
	I1213 19:13:23.007939   92925 logs.go:282] 0 containers: []
	W1213 19:13:23.007948   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:23.007955   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:23.008018   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:23.040260   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:23.040284   92925 cri.go:89] found id: ""
	I1213 19:13:23.040293   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:23.040348   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:23.044273   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:23.044348   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:23.073414   92925 cri.go:89] found id: ""
	I1213 19:13:23.073445   92925 logs.go:282] 0 containers: []
	W1213 19:13:23.073454   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:23.073466   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:23.073478   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:23.147486   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:23.147526   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:23.180397   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:23.180426   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:23.262279   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:23.253482    8457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:23.254529    8457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:23.255324    8457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:23.256834    8457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:23.257439    8457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:23.253482    8457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:23.254529    8457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:23.255324    8457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:23.256834    8457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:23.257439    8457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:23.262302   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:23.262318   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:23.288912   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:23.288942   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:23.328328   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:23.328366   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:23.421984   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:23.422020   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:23.524961   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:23.524997   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:23.542790   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:23.542821   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:23.591486   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:23.591522   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:23.621748   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:23.621777   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:26.152673   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:26.164673   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:26.164740   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:26.192010   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:26.192031   92925 cri.go:89] found id: ""
	I1213 19:13:26.192040   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:26.192095   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:26.195849   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:26.195918   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:26.224593   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:26.224657   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:26.224677   92925 cri.go:89] found id: ""
	I1213 19:13:26.224702   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:26.224772   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:26.228545   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:26.231970   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:26.232086   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:26.259044   92925 cri.go:89] found id: ""
	I1213 19:13:26.259066   92925 logs.go:282] 0 containers: []
	W1213 19:13:26.259075   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:26.259080   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:26.259137   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:26.287771   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:26.287793   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:26.287798   92925 cri.go:89] found id: ""
	I1213 19:13:26.287805   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:26.287861   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:26.293156   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:26.296722   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:26.296805   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:26.323701   92925 cri.go:89] found id: ""
	I1213 19:13:26.323731   92925 logs.go:282] 0 containers: []
	W1213 19:13:26.323746   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:26.323753   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:26.323820   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:26.350119   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:26.350137   92925 cri.go:89] found id: ""
	I1213 19:13:26.350145   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:26.350199   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:26.353849   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:26.353916   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:26.380009   92925 cri.go:89] found id: ""
	I1213 19:13:26.380035   92925 logs.go:282] 0 containers: []
	W1213 19:13:26.380044   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:26.380053   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:26.380065   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:26.438029   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:26.438062   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:26.475066   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:26.475096   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:26.507857   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:26.507887   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:26.521466   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:26.521493   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:26.565942   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:26.565983   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:26.634647   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:26.634680   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:26.662943   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:26.662972   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:26.737712   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:26.737749   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:26.840754   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:26.840792   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:26.911511   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:26.903881    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:26.904637    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:26.906164    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:26.906441    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:26.907906    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:26.903881    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:26.904637    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:26.906164    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:26.906441    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:26.907906    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:26.911534   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:26.911547   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:29.438403   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:29.449664   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:29.449742   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:29.477323   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:29.477342   92925 cri.go:89] found id: ""
	I1213 19:13:29.477351   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:29.477405   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:29.480946   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:29.481052   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:29.515446   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:29.515469   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:29.515473   92925 cri.go:89] found id: ""
	I1213 19:13:29.515480   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:29.515537   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:29.520209   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:29.523894   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:29.523994   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:29.550207   92925 cri.go:89] found id: ""
	I1213 19:13:29.550232   92925 logs.go:282] 0 containers: []
	W1213 19:13:29.550242   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:29.550272   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:29.550349   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:29.576154   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:29.576177   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:29.576182   92925 cri.go:89] found id: ""
	I1213 19:13:29.576195   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:29.576267   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:29.580154   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:29.583801   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:29.583876   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:29.613771   92925 cri.go:89] found id: ""
	I1213 19:13:29.613795   92925 logs.go:282] 0 containers: []
	W1213 19:13:29.613805   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:29.613810   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:29.613872   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:29.640080   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:29.640103   92925 cri.go:89] found id: ""
	I1213 19:13:29.640112   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:29.640167   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:29.643810   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:29.643883   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:29.674496   92925 cri.go:89] found id: ""
	I1213 19:13:29.674567   92925 logs.go:282] 0 containers: []
	W1213 19:13:29.674583   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:29.674592   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:29.674616   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:29.704354   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:29.704383   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:29.760688   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:29.760724   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:29.789616   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:29.789644   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:29.817300   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:29.817328   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:29.848838   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:29.848866   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:29.949492   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:29.949527   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:30.081487   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:30.081528   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:30.170948   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:30.170989   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:30.251666   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:30.251705   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:30.265404   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:30.265433   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:30.340984   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:30.332491    8789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:30.333283    8789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:30.335347    8789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:30.335760    8789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:30.337330    8789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:30.332491    8789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:30.333283    8789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:30.335347    8789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:30.335760    8789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:30.337330    8789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:32.841244   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:32.851830   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:32.851904   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:32.878262   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:32.878282   92925 cri.go:89] found id: ""
	I1213 19:13:32.878290   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:32.878345   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:32.881794   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:32.881871   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:32.908784   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:32.908807   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:32.908812   92925 cri.go:89] found id: ""
	I1213 19:13:32.908819   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:32.908877   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:32.913113   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:32.916615   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:32.916713   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:32.945436   92925 cri.go:89] found id: ""
	I1213 19:13:32.945460   92925 logs.go:282] 0 containers: []
	W1213 19:13:32.945468   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:32.945474   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:32.945532   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:32.972389   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:32.972409   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:32.972414   92925 cri.go:89] found id: ""
	I1213 19:13:32.972421   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:32.972496   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:32.976105   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:32.979491   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:32.979558   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:33.013568   92925 cri.go:89] found id: ""
	I1213 19:13:33.013590   92925 logs.go:282] 0 containers: []
	W1213 19:13:33.013598   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:33.013604   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:33.013662   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:33.041534   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:33.041557   92925 cri.go:89] found id: ""
	I1213 19:13:33.041566   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:33.041622   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:33.045294   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:33.045445   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:33.074126   92925 cri.go:89] found id: ""
	I1213 19:13:33.074196   92925 logs.go:282] 0 containers: []
	W1213 19:13:33.074224   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:33.074248   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:33.074274   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:33.108085   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:33.108112   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:33.196053   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:33.196096   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:33.238729   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:33.238801   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:33.334220   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:33.334258   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:33.347401   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:33.347431   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:33.415328   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:33.415362   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:33.444593   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:33.444672   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:33.519042   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:33.509468    8903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:33.510273    8903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:33.511953    8903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:33.512620    8903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:33.513636    8903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:33.509468    8903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:33.510273    8903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:33.511953    8903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:33.512620    8903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:33.513636    8903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:33.519066   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:33.519078   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:33.546564   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:33.546593   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:33.588382   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:33.588418   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:36.135267   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:36.146588   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:36.146662   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:36.173719   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:36.173741   92925 cri.go:89] found id: ""
	I1213 19:13:36.173750   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:36.173821   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:36.177610   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:36.177680   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:36.204513   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:36.204536   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:36.204540   92925 cri.go:89] found id: ""
	I1213 19:13:36.204548   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:36.204602   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:36.208516   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:36.211831   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:36.211901   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:36.243167   92925 cri.go:89] found id: ""
	I1213 19:13:36.243194   92925 logs.go:282] 0 containers: []
	W1213 19:13:36.243205   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:36.243211   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:36.243271   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:36.272787   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:36.272812   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:36.272817   92925 cri.go:89] found id: ""
	I1213 19:13:36.272825   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:36.272880   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:36.276627   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:36.280060   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:36.280182   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:36.309203   92925 cri.go:89] found id: ""
	I1213 19:13:36.309231   92925 logs.go:282] 0 containers: []
	W1213 19:13:36.309242   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:36.309248   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:36.309310   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:36.342531   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:36.342554   92925 cri.go:89] found id: ""
	I1213 19:13:36.342563   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:36.342631   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:36.346318   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:36.346392   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:36.374406   92925 cri.go:89] found id: ""
	I1213 19:13:36.374442   92925 logs.go:282] 0 containers: []
	W1213 19:13:36.374467   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:36.374485   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:36.374497   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:36.474302   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:36.474340   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:36.557406   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:36.549415    9001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:36.550022    9001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:36.551319    9001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:36.551900    9001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:36.553579    9001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:36.549415    9001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:36.550022    9001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:36.551319    9001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:36.551900    9001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:36.553579    9001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:36.557430   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:36.557443   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:36.583387   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:36.583415   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:36.623378   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:36.623413   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:36.666931   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:36.666964   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:36.696482   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:36.696513   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:36.730677   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:36.730708   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:36.743357   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:36.743386   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:36.813864   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:36.813900   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:36.848686   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:36.848716   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:39.433464   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:39.444066   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:39.444136   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:39.471666   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:39.471686   92925 cri.go:89] found id: ""
	I1213 19:13:39.471693   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:39.471753   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:39.475549   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:39.475641   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:39.505541   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:39.505615   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:39.505645   92925 cri.go:89] found id: ""
	I1213 19:13:39.505667   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:39.505752   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:39.511310   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:39.515781   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:39.515898   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:39.545256   92925 cri.go:89] found id: ""
	I1213 19:13:39.545290   92925 logs.go:282] 0 containers: []
	W1213 19:13:39.545300   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:39.545306   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:39.545379   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:39.576057   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:39.576080   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:39.576085   92925 cri.go:89] found id: ""
	I1213 19:13:39.576092   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:39.576146   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:39.580177   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:39.584087   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:39.584160   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:39.610819   92925 cri.go:89] found id: ""
	I1213 19:13:39.610843   92925 logs.go:282] 0 containers: []
	W1213 19:13:39.610863   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:39.610871   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:39.610929   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:39.638458   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:39.638481   92925 cri.go:89] found id: ""
	I1213 19:13:39.638503   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:39.638564   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:39.642537   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:39.642610   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:39.670872   92925 cri.go:89] found id: ""
	I1213 19:13:39.670951   92925 logs.go:282] 0 containers: []
	W1213 19:13:39.670975   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:39.670998   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:39.671043   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:39.774702   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:39.774743   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:39.846826   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:39.837968    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:39.838545    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:39.840574    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:39.841359    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:39.842988    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:39.837968    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:39.838545    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:39.840574    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:39.841359    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:39.842988    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:39.846849   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:39.846862   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:39.892712   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:39.892743   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:39.960690   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:39.960729   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:40.022528   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:40.022560   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:40.107424   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:40.107461   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:40.149433   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:40.149472   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:40.162446   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:40.162479   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:40.191980   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:40.192009   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:40.239148   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:40.239228   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:42.771936   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:42.782654   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:42.782726   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:42.808850   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:42.808869   92925 cri.go:89] found id: ""
	I1213 19:13:42.808877   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:42.808938   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:42.812682   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:42.812753   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:42.840980   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:42.841072   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:42.841097   92925 cri.go:89] found id: ""
	I1213 19:13:42.841122   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:42.841210   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:42.844946   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:42.848726   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:42.848811   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:42.888597   92925 cri.go:89] found id: ""
	I1213 19:13:42.888663   92925 logs.go:282] 0 containers: []
	W1213 19:13:42.888688   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:42.888707   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:42.888791   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:42.916253   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:42.916323   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:42.916341   92925 cri.go:89] found id: ""
	I1213 19:13:42.916364   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:42.916443   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:42.920031   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:42.923493   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:42.923565   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:42.950967   92925 cri.go:89] found id: ""
	I1213 19:13:42.950991   92925 logs.go:282] 0 containers: []
	W1213 19:13:42.950999   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:42.951005   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:42.951062   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:42.977861   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:42.977884   92925 cri.go:89] found id: ""
	I1213 19:13:42.977892   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:42.977946   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:42.985150   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:42.985252   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:43.014767   92925 cri.go:89] found id: ""
	I1213 19:13:43.014794   92925 logs.go:282] 0 containers: []
	W1213 19:13:43.014803   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:43.014813   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:43.014826   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:43.089031   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:43.089070   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:43.152812   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:43.152840   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:43.253685   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:43.253720   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:43.268102   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:43.268130   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:43.342529   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:43.333442    9294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:43.333905    9294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:43.335923    9294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:43.336467    9294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:43.338397    9294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:43.333442    9294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:43.333905    9294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:43.335923    9294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:43.336467    9294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:43.338397    9294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:43.342553   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:43.342566   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:43.383957   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:43.383996   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:43.431627   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:43.431662   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:43.504349   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:43.504386   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:43.541135   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:43.541167   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:43.570288   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:43.570315   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:46.101243   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:46.114537   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:46.114605   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:46.142285   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:46.142310   92925 cri.go:89] found id: ""
	I1213 19:13:46.142319   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:46.142374   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:46.146198   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:46.146275   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:46.172413   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:46.172485   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:46.172504   92925 cri.go:89] found id: ""
	I1213 19:13:46.172529   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:46.172649   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:46.176629   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:46.180398   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:46.180514   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:46.208892   92925 cri.go:89] found id: ""
	I1213 19:13:46.208925   92925 logs.go:282] 0 containers: []
	W1213 19:13:46.208934   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:46.208942   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:46.209074   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:46.237365   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:46.237388   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:46.237394   92925 cri.go:89] found id: ""
	I1213 19:13:46.237401   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:46.237458   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:46.241815   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:46.245384   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:46.245482   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:46.272996   92925 cri.go:89] found id: ""
	I1213 19:13:46.273063   92925 logs.go:282] 0 containers: []
	W1213 19:13:46.273072   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:46.273078   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:46.273160   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:46.302629   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:46.302654   92925 cri.go:89] found id: ""
	I1213 19:13:46.302663   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:46.302737   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:46.306762   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:46.306861   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:46.337280   92925 cri.go:89] found id: ""
	I1213 19:13:46.337346   92925 logs.go:282] 0 containers: []
	W1213 19:13:46.337369   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:46.337384   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:46.337395   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:46.349174   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:46.349204   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:46.419942   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:46.411077    9420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:46.411612    9420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:46.413348    9420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:46.413991    9420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:46.415827    9420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:46.411077    9420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:46.411612    9420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:46.413348    9420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:46.413991    9420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:46.415827    9420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:46.419977   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:46.419993   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:46.446859   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:46.446885   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:46.487087   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:46.487124   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:46.547232   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:46.547267   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:46.574826   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:46.574854   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:46.602584   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:46.602609   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:46.640086   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:46.640117   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:46.740777   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:46.740818   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:46.812315   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:46.812357   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:49.395199   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:49.405934   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:49.406009   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:49.433789   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:49.433810   92925 cri.go:89] found id: ""
	I1213 19:13:49.433827   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:49.433883   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:49.437578   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:49.437651   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:49.471711   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:49.471734   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:49.471740   92925 cri.go:89] found id: ""
	I1213 19:13:49.471748   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:49.471801   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:49.475461   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:49.479094   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:49.479168   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:49.505391   92925 cri.go:89] found id: ""
	I1213 19:13:49.505417   92925 logs.go:282] 0 containers: []
	W1213 19:13:49.505426   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:49.505433   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:49.505488   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:49.540863   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:49.540890   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:49.540895   92925 cri.go:89] found id: ""
	I1213 19:13:49.540903   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:49.540960   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:49.544771   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:49.548451   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:49.548524   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:49.575402   92925 cri.go:89] found id: ""
	I1213 19:13:49.575428   92925 logs.go:282] 0 containers: []
	W1213 19:13:49.575436   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:49.575442   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:49.575501   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:49.605123   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:49.605143   92925 cri.go:89] found id: ""
	I1213 19:13:49.605151   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:49.605211   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:49.608919   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:49.609061   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:49.637050   92925 cri.go:89] found id: ""
	I1213 19:13:49.637075   92925 logs.go:282] 0 containers: []
	W1213 19:13:49.637084   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:49.637093   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:49.637105   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:49.744000   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:49.744048   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:49.811345   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:49.802050    9555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:49.802444    9555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:49.805468    9555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:49.805922    9555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:49.807507    9555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:49.802050    9555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:49.802444    9555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:49.805468    9555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:49.805922    9555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:49.807507    9555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:49.811370   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:49.811384   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:49.852043   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:49.852081   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:49.896314   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:49.896349   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:49.924211   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:49.924240   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:50.006219   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:50.006263   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:50.039895   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:50.039978   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:50.054629   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:50.054656   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:50.084937   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:50.084966   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:50.159510   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:50.159553   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:52.688326   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:52.699486   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:52.699554   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:52.726195   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:52.726216   92925 cri.go:89] found id: ""
	I1213 19:13:52.726224   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:52.726280   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:52.730715   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:52.730785   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:52.756911   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:52.756933   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:52.756938   92925 cri.go:89] found id: ""
	I1213 19:13:52.756946   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:52.757069   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:52.760788   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:52.764452   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:52.764551   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:52.790658   92925 cri.go:89] found id: ""
	I1213 19:13:52.790732   92925 logs.go:282] 0 containers: []
	W1213 19:13:52.790749   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:52.790756   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:52.790816   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:52.818365   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:52.818388   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:52.818394   92925 cri.go:89] found id: ""
	I1213 19:13:52.818402   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:52.818477   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:52.822460   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:52.826054   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:52.826130   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:52.853218   92925 cri.go:89] found id: ""
	I1213 19:13:52.853245   92925 logs.go:282] 0 containers: []
	W1213 19:13:52.853256   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:52.853262   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:52.853321   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:52.879712   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:52.879736   92925 cri.go:89] found id: ""
	I1213 19:13:52.879744   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:52.879798   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:52.883563   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:52.883639   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:52.910499   92925 cri.go:89] found id: ""
	I1213 19:13:52.910526   92925 logs.go:282] 0 containers: []
	W1213 19:13:52.910535   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:52.910545   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:52.910577   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:52.990183   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:52.990219   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:53.026776   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:53.026805   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:53.118043   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:53.107629    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:53.110332    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:53.111160    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:53.112144    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:53.113182    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:53.107629    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:53.110332    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:53.111160    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:53.112144    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:53.113182    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:53.118090   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:53.118141   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:53.160995   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:53.161190   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:53.204763   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:53.204795   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:53.270772   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:53.270810   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:53.370857   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:53.370895   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:53.383046   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:53.383074   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:53.410648   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:53.410684   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:53.439739   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:53.439768   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:55.970243   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:55.981613   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:55.981689   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:56.018614   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:56.018637   92925 cri.go:89] found id: ""
	I1213 19:13:56.018647   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:56.018707   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:56.022914   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:56.022990   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:56.056158   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:56.056182   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:56.056187   92925 cri.go:89] found id: ""
	I1213 19:13:56.056194   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:56.056275   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:56.061504   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:56.065201   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:56.065281   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:56.094861   92925 cri.go:89] found id: ""
	I1213 19:13:56.094887   92925 logs.go:282] 0 containers: []
	W1213 19:13:56.094896   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:56.094903   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:56.094982   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:56.133165   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:56.133240   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:56.133260   92925 cri.go:89] found id: ""
	I1213 19:13:56.133291   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:56.133356   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:56.137225   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:56.140713   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:56.140785   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:56.168013   92925 cri.go:89] found id: ""
	I1213 19:13:56.168039   92925 logs.go:282] 0 containers: []
	W1213 19:13:56.168048   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:56.168055   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:56.168118   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:56.196793   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:56.196867   92925 cri.go:89] found id: ""
	I1213 19:13:56.196876   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:56.196935   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:56.200591   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:56.200672   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:56.227851   92925 cri.go:89] found id: ""
	I1213 19:13:56.227877   92925 logs.go:282] 0 containers: []
	W1213 19:13:56.227887   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:56.227896   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:56.227908   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:56.323380   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:56.323416   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:56.337259   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:56.337289   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:56.362908   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:56.362939   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:56.443333   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:56.443372   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:56.522467   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:56.511318    9846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:56.512215    9846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:56.514040    9846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:56.515835    9846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:56.516378    9846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:56.511318    9846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:56.512215    9846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:56.514040    9846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:56.515835    9846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:56.516378    9846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:56.522485   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:56.522498   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:56.561809   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:56.561843   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:56.606943   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:56.606979   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:56.678268   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:56.678310   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:56.707280   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:56.707309   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:56.736890   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:56.736917   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
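	The cycle above is minikube's log-gathering loop: it discovers control-plane containers with crictl, tails each container's logs along with kubelet, CRI-O, and dmesg, and finally attempts a kubectl describe. A minimal shell sketch of the same checks, using only the commands and paths that appear in the log above (container IDs must be substituted by hand):

	# discover control-plane containers (all states, so exited containers are listed too)
	sudo crictl ps -a --quiet --name=kube-apiserver
	sudo crictl ps -a --quiet --name=etcd
	sudo crictl ps -a --quiet --name=kube-scheduler
	sudo crictl ps -a --quiet --name=kube-controller-manager

	# tail the last 400 lines of any container ID returned above
	sudo /usr/local/bin/crictl logs --tail 400 <container-id>

	# host-level logs gathered in the same cycle
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400

	# the step that keeps failing in this window: the apiserver on localhost:8443 never answers
	sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig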
	I1213 19:13:59.286954   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:59.298376   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:59.298447   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:59.325376   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:59.325399   92925 cri.go:89] found id: ""
	I1213 19:13:59.325407   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:59.325464   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:59.329049   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:59.329123   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:59.356066   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:59.356085   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:59.356089   92925 cri.go:89] found id: ""
	I1213 19:13:59.356097   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:59.356150   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:59.360113   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:59.363660   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:59.363736   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:59.389568   92925 cri.go:89] found id: ""
	I1213 19:13:59.389594   92925 logs.go:282] 0 containers: []
	W1213 19:13:59.389604   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:59.389611   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:59.389692   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:59.423243   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:59.423266   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:59.423270   92925 cri.go:89] found id: ""
	I1213 19:13:59.423278   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:59.423350   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:59.426944   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:59.431770   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:59.431844   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:59.458103   92925 cri.go:89] found id: ""
	I1213 19:13:59.458173   92925 logs.go:282] 0 containers: []
	W1213 19:13:59.458220   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:59.458246   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:59.458332   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:59.487250   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:59.487324   92925 cri.go:89] found id: ""
	I1213 19:13:59.487340   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:59.487406   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:59.491784   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:59.491852   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:59.525717   92925 cri.go:89] found id: ""
	I1213 19:13:59.525739   92925 logs.go:282] 0 containers: []
	W1213 19:13:59.525748   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:59.525756   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:59.525768   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:59.554063   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:59.554091   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:59.599874   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:59.599909   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:59.626733   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:59.626765   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:59.700778   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:59.700814   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:59.713358   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:59.713388   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:59.783137   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:59.774677   10001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:59.775356   10001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:59.776867   10001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:59.777580   10001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:59.778486   10001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:59.774677   10001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:59.775356   10001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:59.776867   10001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:59.777580   10001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:59.778486   10001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:59.783158   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:59.783169   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:59.832218   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:59.832248   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:59.901253   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:59.901329   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:59.930678   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:59.930701   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:59.962070   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:59.962099   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:02.744450   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:02.755514   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:02.755587   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:02.782984   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:02.783079   92925 cri.go:89] found id: ""
	I1213 19:14:02.783095   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:02.783157   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:02.787187   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:02.787262   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:02.814931   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:02.814954   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:02.814959   92925 cri.go:89] found id: ""
	I1213 19:14:02.814967   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:02.815031   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:02.818983   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:02.822788   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:02.822865   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:02.848942   92925 cri.go:89] found id: ""
	I1213 19:14:02.848966   92925 logs.go:282] 0 containers: []
	W1213 19:14:02.848975   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:02.848991   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:02.849096   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:02.876134   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:02.876155   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:02.876160   92925 cri.go:89] found id: ""
	I1213 19:14:02.876168   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:02.876249   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:02.880576   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:02.885335   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:02.885459   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:02.913660   92925 cri.go:89] found id: ""
	I1213 19:14:02.913733   92925 logs.go:282] 0 containers: []
	W1213 19:14:02.913763   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:02.913802   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:02.913924   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:02.940178   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:02.940248   92925 cri.go:89] found id: ""
	I1213 19:14:02.940270   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:02.940359   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:02.944376   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:02.944500   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:02.975815   92925 cri.go:89] found id: ""
	I1213 19:14:02.975838   92925 logs.go:282] 0 containers: []
	W1213 19:14:02.975846   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:02.975855   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:02.975867   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:03.074688   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:03.074723   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:03.156277   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:03.147816   10113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:03.148501   10113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:03.150174   10113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:03.150777   10113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:03.152270   10113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:03.147816   10113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:03.148501   10113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:03.150174   10113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:03.150777   10113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:03.152270   10113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:03.156299   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:03.156311   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:03.182450   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:03.182477   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:03.221147   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:03.221181   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:03.292920   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:03.292962   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:03.323958   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:03.323983   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:03.397255   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:03.397289   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:03.410296   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:03.410325   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:03.465930   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:03.465966   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:03.497989   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:03.498017   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
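	Every describe-nodes attempt in this window fails identically: kubectl on the node gets connection refused from localhost:8443. Because the discovery step runs crictl ps -a, which lists containers in all states, a found kube-apiserver ID does not by itself mean the apiserver is still serving. A short sketch (not part of the test output) that separates the two cases on the node:

	# running containers only; empty output here while `crictl ps -a` still returns an ID
	# means the apiserver container is no longer running
	sudo crictl ps --name=kube-apiserver

	# confirm whether anything is listening on the apiserver port
	sudo ss -ltn | grep ':8443' || echo 'nothing listening on 8443'

	# once something answers, the exact command from the log should succeed again
	sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig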
	I1213 19:14:06.058798   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:06.069576   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:06.069643   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:06.097652   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:06.097675   92925 cri.go:89] found id: ""
	I1213 19:14:06.097684   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:06.097767   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:06.103860   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:06.103983   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:06.133321   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:06.133354   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:06.133359   92925 cri.go:89] found id: ""
	I1213 19:14:06.133367   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:06.133434   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:06.137349   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:06.140932   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:06.141036   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:06.174768   92925 cri.go:89] found id: ""
	I1213 19:14:06.174796   92925 logs.go:282] 0 containers: []
	W1213 19:14:06.174806   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:06.174813   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:06.174923   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:06.202214   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:06.202245   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:06.202249   92925 cri.go:89] found id: ""
	I1213 19:14:06.202257   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:06.202315   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:06.206201   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:06.209869   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:06.209950   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:06.240738   92925 cri.go:89] found id: ""
	I1213 19:14:06.240762   92925 logs.go:282] 0 containers: []
	W1213 19:14:06.240771   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:06.240777   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:06.240838   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:06.267045   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:06.267067   92925 cri.go:89] found id: ""
	I1213 19:14:06.267076   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:06.267134   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:06.270950   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:06.271059   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:06.298538   92925 cri.go:89] found id: ""
	I1213 19:14:06.298566   92925 logs.go:282] 0 containers: []
	W1213 19:14:06.298576   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:06.298585   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:06.298600   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:06.401303   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:06.401348   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:06.414599   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:06.414631   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:06.441984   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:06.442056   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:06.481290   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:06.481321   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:06.541131   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:06.541162   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:06.614944   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:06.614978   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:06.700895   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:06.700937   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:06.734007   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:06.734036   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:06.804578   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:06.795862   10300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:06.796443   10300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:06.798255   10300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:06.798765   10300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:06.800521   10300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:06.795862   10300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:06.796443   10300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:06.798255   10300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:06.798765   10300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:06.800521   10300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:06.804604   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:06.804616   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:06.832247   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:06.832275   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:09.358770   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:09.369376   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:09.369446   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:09.397174   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:09.397250   92925 cri.go:89] found id: ""
	I1213 19:14:09.397268   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:09.397341   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:09.401282   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:09.401379   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:09.430806   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:09.430829   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:09.430834   92925 cri.go:89] found id: ""
	I1213 19:14:09.430842   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:09.430895   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:09.434593   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:09.437861   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:09.437931   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:09.462972   92925 cri.go:89] found id: ""
	I1213 19:14:09.463040   92925 logs.go:282] 0 containers: []
	W1213 19:14:09.463067   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:09.463087   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:09.463154   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:09.489906   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:09.489930   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:09.489935   92925 cri.go:89] found id: ""
	I1213 19:14:09.489943   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:09.490000   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:09.493996   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:09.497780   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:09.497895   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:09.529207   92925 cri.go:89] found id: ""
	I1213 19:14:09.529232   92925 logs.go:282] 0 containers: []
	W1213 19:14:09.529241   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:09.529280   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:09.529364   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:09.556267   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:09.556289   92925 cri.go:89] found id: ""
	I1213 19:14:09.556297   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:09.556383   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:09.560687   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:09.560770   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:09.592345   92925 cri.go:89] found id: ""
	I1213 19:14:09.592380   92925 logs.go:282] 0 containers: []
	W1213 19:14:09.592389   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:09.592398   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:09.592410   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:09.604889   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:09.604917   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:09.631468   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:09.631498   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:09.670679   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:09.670712   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:09.715815   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:09.715851   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:09.743494   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:09.743523   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:09.775725   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:09.775753   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:09.873965   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:09.874039   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:09.959605   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:09.948036   10435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:09.948708   10435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:09.950229   10435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:09.950803   10435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:09.952453   10435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:09.948036   10435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:09.948708   10435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:09.950229   10435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:09.950803   10435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:09.952453   10435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:09.959680   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:09.959707   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:10.051190   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:10.051228   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:10.086712   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:10.086738   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:12.672644   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:12.683960   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:12.684058   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:12.712689   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:12.712710   92925 cri.go:89] found id: ""
	I1213 19:14:12.712718   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:12.712772   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:12.716732   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:12.716806   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:12.744449   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:12.744468   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:12.744473   92925 cri.go:89] found id: ""
	I1213 19:14:12.744480   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:12.744548   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:12.748558   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:12.752120   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:12.752195   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:12.779575   92925 cri.go:89] found id: ""
	I1213 19:14:12.779602   92925 logs.go:282] 0 containers: []
	W1213 19:14:12.779611   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:12.779617   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:12.779677   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:12.808259   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:12.808279   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:12.808284   92925 cri.go:89] found id: ""
	I1213 19:14:12.808292   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:12.808348   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:12.812274   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:12.816250   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:12.816380   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:12.842528   92925 cri.go:89] found id: ""
	I1213 19:14:12.842556   92925 logs.go:282] 0 containers: []
	W1213 19:14:12.842566   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:12.842572   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:12.842655   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:12.870846   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:12.870916   92925 cri.go:89] found id: ""
	I1213 19:14:12.870939   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:12.871003   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:12.874709   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:12.874809   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:12.901168   92925 cri.go:89] found id: ""
	I1213 19:14:12.901194   92925 logs.go:282] 0 containers: []
	W1213 19:14:12.901203   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:12.901212   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:12.901224   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:12.993856   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:12.993888   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:13.006289   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:13.006320   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:13.038515   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:13.038544   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:13.101746   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:13.101795   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:13.153697   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:13.153736   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:13.183337   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:13.183366   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:13.262960   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:13.262995   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:13.297818   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:13.297845   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:13.368622   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:13.360485   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:13.361349   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:13.363057   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:13.363352   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:13.364843   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:13.360485   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:13.361349   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:13.363057   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:13.363352   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:13.364843   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:13.368650   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:13.368664   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:13.439804   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:13.439843   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
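	The same kube-apiserver ID (667060dc...) turns up in every discovery pass while port 8443 stays refused, which suggests the apiserver is not currently listening; it may have exited or be restarting. A quick way to confirm its state and read its final output on the node, sketched with the ID taken from the log above:

	# full status, including state and exit code, for the apiserver container seen above
	sudo crictl inspect 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e

	# last lines it wrote before connections started being refused
	sudo /usr/local/bin/crictl logs --tail 50 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e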
	I1213 19:14:15.976229   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:15.989077   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:15.989247   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:16.020054   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:16.020079   92925 cri.go:89] found id: ""
	I1213 19:14:16.020087   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:16.020158   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:16.024026   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:16.024118   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:16.051647   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:16.051670   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:16.051681   92925 cri.go:89] found id: ""
	I1213 19:14:16.051688   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:16.051772   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:16.055489   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:16.059115   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:16.059234   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:16.086414   92925 cri.go:89] found id: ""
	I1213 19:14:16.086438   92925 logs.go:282] 0 containers: []
	W1213 19:14:16.086447   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:16.086453   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:16.086513   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:16.118349   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:16.118415   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:16.118434   92925 cri.go:89] found id: ""
	I1213 19:14:16.118458   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:16.118545   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:16.122398   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:16.129488   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:16.129561   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:16.156699   92925 cri.go:89] found id: ""
	I1213 19:14:16.156725   92925 logs.go:282] 0 containers: []
	W1213 19:14:16.156734   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:16.156740   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:16.156799   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:16.183419   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:16.183444   92925 cri.go:89] found id: ""
	I1213 19:14:16.183465   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:16.183520   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:16.187500   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:16.187599   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:16.213532   92925 cri.go:89] found id: ""
	I1213 19:14:16.213610   92925 logs.go:282] 0 containers: []
	W1213 19:14:16.213634   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:16.213657   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:16.213703   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:16.225956   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:16.225985   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:16.299377   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:16.290117   10665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:16.291089   10665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:16.292835   10665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:16.293694   10665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:16.295412   10665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:16.290117   10665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:16.291089   10665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:16.292835   10665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:16.293694   10665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:16.295412   10665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:16.299401   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:16.299416   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:16.327259   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:16.327288   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:16.353346   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:16.353376   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:16.380053   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:16.380079   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:16.415886   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:16.415918   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:16.512571   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:16.512605   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:16.557415   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:16.557451   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:16.616391   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:16.616424   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:16.692096   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:16.692131   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:19.277525   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:19.287988   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:19.288109   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:19.314035   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:19.314055   92925 cri.go:89] found id: ""
	I1213 19:14:19.314064   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:19.314137   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:19.317785   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:19.317856   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:19.344128   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:19.344151   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:19.344155   92925 cri.go:89] found id: ""
	I1213 19:14:19.344163   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:19.344216   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:19.348619   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:19.351872   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:19.351961   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:19.377237   92925 cri.go:89] found id: ""
	I1213 19:14:19.377263   92925 logs.go:282] 0 containers: []
	W1213 19:14:19.377272   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:19.377278   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:19.377360   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:19.404210   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:19.404233   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:19.404238   92925 cri.go:89] found id: ""
	I1213 19:14:19.404245   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:19.404318   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:19.407909   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:19.411268   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:19.411336   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:19.437051   92925 cri.go:89] found id: ""
	I1213 19:14:19.437075   92925 logs.go:282] 0 containers: []
	W1213 19:14:19.437083   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:19.437089   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:19.437147   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:19.461816   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:19.461847   92925 cri.go:89] found id: ""
	I1213 19:14:19.461856   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:19.461911   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:19.465492   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:19.465587   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:19.491501   92925 cri.go:89] found id: ""
	I1213 19:14:19.491527   92925 logs.go:282] 0 containers: []
	W1213 19:14:19.491536   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:19.491545   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:19.491588   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:19.530624   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:19.530652   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:19.570388   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:19.570423   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:19.649601   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:19.649638   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:19.682548   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:19.682579   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:19.765347   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:19.765383   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:19.797401   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:19.797430   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:19.892983   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:19.893036   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:19.905252   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:19.905281   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:19.976038   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:19.968048   10849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:19.968518   10849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:19.969788   10849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:19.970473   10849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:19.972132   10849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:19.968048   10849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:19.968518   10849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:19.969788   10849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:19.970473   10849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:19.972132   10849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:19.976061   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:19.976074   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:20.015893   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:20.015932   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:22.580793   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:22.591726   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:22.591801   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:22.617941   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:22.617972   92925 cri.go:89] found id: ""
	I1213 19:14:22.617981   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:22.618039   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:22.621895   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:22.621967   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:22.648715   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:22.648778   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:22.648797   92925 cri.go:89] found id: ""
	I1213 19:14:22.648821   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:22.648904   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:22.653305   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:22.657032   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:22.657104   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:22.686906   92925 cri.go:89] found id: ""
	I1213 19:14:22.686932   92925 logs.go:282] 0 containers: []
	W1213 19:14:22.686946   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:22.686952   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:22.687013   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:22.714929   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:22.714951   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:22.714956   92925 cri.go:89] found id: ""
	I1213 19:14:22.714964   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:22.715025   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:22.719071   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:22.722714   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:22.722784   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:22.750440   92925 cri.go:89] found id: ""
	I1213 19:14:22.750470   92925 logs.go:282] 0 containers: []
	W1213 19:14:22.750480   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:22.750486   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:22.750549   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:22.777550   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:22.777572   92925 cri.go:89] found id: ""
	I1213 19:14:22.777580   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:22.777635   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:22.781380   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:22.781475   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:22.816511   92925 cri.go:89] found id: ""
	I1213 19:14:22.816537   92925 logs.go:282] 0 containers: []
	W1213 19:14:22.816547   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:22.816572   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:22.816617   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:22.842295   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:22.842322   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:22.882060   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:22.882095   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:22.965336   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:22.965374   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:22.995696   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:22.995731   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:23.098694   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:23.098782   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:23.117712   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:23.117743   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:23.167456   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:23.167497   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:23.195171   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:23.195199   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:23.279228   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:23.279264   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:23.318709   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:23.318738   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:23.384532   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:23.376056   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:23.376628   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:23.378283   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:23.379367   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:23.379806   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:23.376056   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:23.376628   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:23.378283   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:23.379367   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:23.379806   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:25.885566   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:25.896623   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:25.896696   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:25.924503   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:25.924535   92925 cri.go:89] found id: ""
	I1213 19:14:25.924544   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:25.924601   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:25.928341   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:25.928413   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:25.966385   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:25.966404   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:25.966409   92925 cri.go:89] found id: ""
	I1213 19:14:25.966417   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:25.966471   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:25.970190   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:25.974101   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:25.974229   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:26.004380   92925 cri.go:89] found id: ""
	I1213 19:14:26.004456   92925 logs.go:282] 0 containers: []
	W1213 19:14:26.004479   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:26.004498   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:26.004595   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:26.031828   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:26.031853   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:26.031860   92925 cri.go:89] found id: ""
	I1213 19:14:26.031868   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:26.031925   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:26.036387   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:26.040161   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:26.040235   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:26.070525   92925 cri.go:89] found id: ""
	I1213 19:14:26.070591   92925 logs.go:282] 0 containers: []
	W1213 19:14:26.070616   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:26.070635   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:26.070724   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:26.108253   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:26.108277   92925 cri.go:89] found id: ""
	I1213 19:14:26.108294   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:26.108373   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:26.112191   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:26.112324   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:26.146018   92925 cri.go:89] found id: ""
	I1213 19:14:26.146042   92925 logs.go:282] 0 containers: []
	W1213 19:14:26.146052   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:26.146060   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:26.146094   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:26.187197   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:26.187229   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:26.232694   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:26.232724   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:26.310398   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:26.310435   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:26.323748   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:26.323775   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:26.350662   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:26.350689   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:26.380636   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:26.380707   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:26.407064   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:26.407089   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:26.483950   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:26.483984   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:26.536817   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:26.536846   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:26.654750   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:26.654801   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:26.733679   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:26.725319   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:26.726046   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:26.727714   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:26.728228   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:26.729870   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:26.725319   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:26.726046   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:26.727714   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:26.728228   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:26.729870   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:29.233968   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:29.244666   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:29.244746   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:29.272994   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:29.273043   92925 cri.go:89] found id: ""
	I1213 19:14:29.273051   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:29.273108   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:29.277950   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:29.278022   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:29.304315   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:29.304334   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:29.304338   92925 cri.go:89] found id: ""
	I1213 19:14:29.304346   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:29.304402   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:29.308379   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:29.311905   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:29.311974   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:29.337925   92925 cri.go:89] found id: ""
	I1213 19:14:29.337953   92925 logs.go:282] 0 containers: []
	W1213 19:14:29.337962   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:29.337968   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:29.338028   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:29.365135   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:29.365156   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:29.365160   92925 cri.go:89] found id: ""
	I1213 19:14:29.365167   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:29.365222   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:29.368867   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:29.372263   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:29.372334   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:29.403367   92925 cri.go:89] found id: ""
	I1213 19:14:29.403393   92925 logs.go:282] 0 containers: []
	W1213 19:14:29.403402   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:29.403408   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:29.403466   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:29.429639   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:29.429703   92925 cri.go:89] found id: ""
	I1213 19:14:29.429718   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:29.429782   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:29.433301   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:29.433373   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:29.460244   92925 cri.go:89] found id: ""
	I1213 19:14:29.460272   92925 logs.go:282] 0 containers: []
	W1213 19:14:29.460282   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:29.460291   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:29.460302   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:29.555127   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:29.555166   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:29.583790   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:29.583827   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:29.646377   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:29.646409   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:29.720554   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:29.720592   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:29.751659   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:29.751686   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:29.788857   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:29.788883   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:29.800809   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:29.800844   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:29.869250   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:29.862112   11259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:29.862682   11259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:29.864146   11259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:29.864555   11259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:29.865755   11259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:29.862112   11259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:29.862682   11259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:29.864146   11259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:29.864555   11259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:29.865755   11259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:29.869274   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:29.869287   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:29.913688   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:29.913724   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:29.956382   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:29.956408   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:32.553678   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:32.565396   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:32.565470   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:32.592588   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:32.592613   92925 cri.go:89] found id: ""
	I1213 19:14:32.592622   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:32.592684   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:32.596429   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:32.596509   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:32.624469   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:32.624493   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:32.624499   92925 cri.go:89] found id: ""
	I1213 19:14:32.624506   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:32.624559   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:32.628270   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:32.631873   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:32.632003   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:32.657120   92925 cri.go:89] found id: ""
	I1213 19:14:32.657144   92925 logs.go:282] 0 containers: []
	W1213 19:14:32.657153   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:32.657159   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:32.657220   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:32.684878   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:32.684901   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:32.684906   92925 cri.go:89] found id: ""
	I1213 19:14:32.684914   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:32.684976   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:32.689235   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:32.692754   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:32.692825   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:32.722855   92925 cri.go:89] found id: ""
	I1213 19:14:32.722878   92925 logs.go:282] 0 containers: []
	W1213 19:14:32.722887   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:32.722893   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:32.722952   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:32.753685   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:32.753704   92925 cri.go:89] found id: ""
	I1213 19:14:32.753712   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:32.753764   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:32.758129   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:32.758214   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:32.784526   92925 cri.go:89] found id: ""
	I1213 19:14:32.784599   92925 logs.go:282] 0 containers: []
	W1213 19:14:32.784623   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:32.784645   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:32.784683   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:32.826015   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:32.826050   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:32.915444   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:32.915483   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:32.943132   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:32.943167   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:33.017904   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:33.017945   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:33.050228   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:33.050258   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:33.122559   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:33.114436   11385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:33.115150   11385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:33.116863   11385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:33.117500   11385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:33.118980   11385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:33.114436   11385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:33.115150   11385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:33.116863   11385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:33.117500   11385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:33.118980   11385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:33.122583   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:33.122597   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:33.177421   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:33.177455   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:33.206989   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:33.207016   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:33.305130   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:33.305169   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:33.319318   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:33.319416   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:35.847899   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:35.859028   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:35.859101   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:35.887722   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:35.887745   92925 cri.go:89] found id: ""
	I1213 19:14:35.887754   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:35.887807   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:35.891699   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:35.891771   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:35.920114   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:35.920138   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:35.920144   92925 cri.go:89] found id: ""
	I1213 19:14:35.920152   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:35.920222   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:35.923937   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:35.927605   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:35.927678   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:35.953980   92925 cri.go:89] found id: ""
	I1213 19:14:35.954007   92925 logs.go:282] 0 containers: []
	W1213 19:14:35.954016   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:35.954023   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:35.954080   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:35.980645   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:35.980665   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:35.980670   92925 cri.go:89] found id: ""
	I1213 19:14:35.980678   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:35.980742   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:35.991946   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:35.996641   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:35.996726   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:36.026202   92925 cri.go:89] found id: ""
	I1213 19:14:36.026228   92925 logs.go:282] 0 containers: []
	W1213 19:14:36.026238   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:36.026245   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:36.026350   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:36.051979   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:36.052001   92925 cri.go:89] found id: ""
	I1213 19:14:36.052010   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:36.052066   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:36.055868   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:36.055938   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:36.083649   92925 cri.go:89] found id: ""
	I1213 19:14:36.083675   92925 logs.go:282] 0 containers: []
	W1213 19:14:36.083685   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:36.083693   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:36.083704   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:36.164414   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:36.164464   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:36.198766   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:36.198793   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:36.298985   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:36.299028   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:36.346466   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:36.346498   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:36.376231   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:36.376258   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:36.403571   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:36.403597   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:36.417684   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:36.417714   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:36.487562   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:36.479494   11529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:36.480246   11529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:36.481848   11529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:36.482211   11529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:36.483808   11529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:36.479494   11529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:36.480246   11529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:36.481848   11529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:36.482211   11529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:36.483808   11529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:36.487585   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:36.487597   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:36.514488   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:36.514514   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:36.559954   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:36.559990   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:39.133526   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:39.150754   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:39.150826   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:39.179295   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:39.179315   92925 cri.go:89] found id: ""
	I1213 19:14:39.179324   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:39.179380   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:39.185538   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:39.185605   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:39.216427   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:39.216449   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:39.216454   92925 cri.go:89] found id: ""
	I1213 19:14:39.216462   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:39.216517   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:39.221041   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:39.225622   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:39.225691   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:39.251922   92925 cri.go:89] found id: ""
	I1213 19:14:39.251946   92925 logs.go:282] 0 containers: []
	W1213 19:14:39.251955   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:39.251961   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:39.252019   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:39.281875   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:39.281900   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:39.281905   92925 cri.go:89] found id: ""
	I1213 19:14:39.281912   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:39.281970   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:39.286420   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:39.290568   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:39.290663   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:39.315894   92925 cri.go:89] found id: ""
	I1213 19:14:39.315996   92925 logs.go:282] 0 containers: []
	W1213 19:14:39.316021   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:39.316041   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:39.316153   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:39.344960   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:39.344983   92925 cri.go:89] found id: ""
	I1213 19:14:39.344992   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:39.345091   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:39.348776   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:39.348847   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:39.378840   92925 cri.go:89] found id: ""
	I1213 19:14:39.378862   92925 logs.go:282] 0 containers: []
	W1213 19:14:39.378870   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:39.378879   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:39.378890   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:39.410058   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:39.410087   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:39.510110   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:39.510188   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:39.542821   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:39.542892   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:39.614365   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:39.605214   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:39.606127   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:39.607756   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:39.608303   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:39.610109   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:39.605214   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:39.606127   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:39.607756   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:39.608303   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:39.610109   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:39.614387   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:39.614403   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:39.656166   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:39.656199   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:39.700850   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:39.700887   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:39.735225   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:39.735267   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:39.765360   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:39.765396   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:39.856068   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:39.856115   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:39.883708   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:39.883738   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:42.458661   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:42.469945   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:42.470018   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:42.497805   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:42.497831   92925 cri.go:89] found id: ""
	I1213 19:14:42.497840   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:42.497898   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:42.502059   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:42.502128   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:42.534485   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:42.534509   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:42.534514   92925 cri.go:89] found id: ""
	I1213 19:14:42.534521   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:42.534578   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:42.539929   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:42.544534   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:42.544618   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:42.572959   92925 cri.go:89] found id: ""
	I1213 19:14:42.572983   92925 logs.go:282] 0 containers: []
	W1213 19:14:42.572991   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:42.572998   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:42.573085   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:42.605231   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:42.605253   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:42.605257   92925 cri.go:89] found id: ""
	I1213 19:14:42.605265   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:42.605324   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:42.609379   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:42.613098   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:42.613183   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:42.641856   92925 cri.go:89] found id: ""
	I1213 19:14:42.641881   92925 logs.go:282] 0 containers: []
	W1213 19:14:42.641890   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:42.641897   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:42.641956   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:42.670835   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:42.670862   92925 cri.go:89] found id: ""
	I1213 19:14:42.670870   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:42.670923   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:42.674669   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:42.674780   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:42.701820   92925 cri.go:89] found id: ""
	I1213 19:14:42.701886   92925 logs.go:282] 0 containers: []
	W1213 19:14:42.701912   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:42.701935   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:42.701974   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:42.795111   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:42.795148   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:42.843272   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:42.843308   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:42.918660   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:42.918701   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:42.953437   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:42.953470   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:42.980705   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:42.980735   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:43.075228   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:43.075266   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:43.089833   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:43.089865   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:43.165554   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:43.156189   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:43.157143   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:43.158950   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:43.160521   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:43.161743   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:43.156189   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:43.157143   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:43.158950   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:43.160521   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:43.161743   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:43.165619   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:43.165648   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:43.195772   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:43.195850   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:43.266745   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:43.266781   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:45.800090   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:45.811228   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:45.811319   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:45.844476   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:45.844562   92925 cri.go:89] found id: ""
	I1213 19:14:45.844585   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:45.844658   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:45.848635   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:45.848730   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:45.878507   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:45.878532   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:45.878537   92925 cri.go:89] found id: ""
	I1213 19:14:45.878545   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:45.878626   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:45.883362   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:45.887015   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:45.887090   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:45.922472   92925 cri.go:89] found id: ""
	I1213 19:14:45.922495   92925 logs.go:282] 0 containers: []
	W1213 19:14:45.922504   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:45.922510   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:45.922571   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:45.961736   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:45.961766   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:45.961772   92925 cri.go:89] found id: ""
	I1213 19:14:45.961779   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:45.961846   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:45.965883   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:45.969985   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:45.970062   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:46.005121   92925 cri.go:89] found id: ""
	I1213 19:14:46.005143   92925 logs.go:282] 0 containers: []
	W1213 19:14:46.005153   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:46.005159   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:46.005218   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:46.033851   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:46.033871   92925 cri.go:89] found id: ""
	I1213 19:14:46.033878   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:46.033932   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:46.037737   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:46.037813   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:46.064426   92925 cri.go:89] found id: ""
	I1213 19:14:46.064493   92925 logs.go:282] 0 containers: []
	W1213 19:14:46.064517   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:46.064541   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:46.064580   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:46.162246   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:46.162285   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:46.175470   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:46.175500   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:46.249273   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:46.239319   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:46.240280   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:46.242150   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:46.242816   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:46.244382   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:46.239319   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:46.240280   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:46.242150   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:46.242816   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:46.244382   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:46.249333   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:46.249347   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:46.277985   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:46.278016   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:46.332032   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:46.332065   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:46.376410   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:46.376446   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:46.455695   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:46.455772   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:46.485453   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:46.485479   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:46.522886   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:46.522916   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:46.601217   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:46.601253   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:49.142956   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:49.157230   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:49.157309   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:49.185733   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:49.185767   92925 cri.go:89] found id: ""
	I1213 19:14:49.185775   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:49.185830   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:49.190180   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:49.190249   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:49.218248   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:49.218271   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:49.218276   92925 cri.go:89] found id: ""
	I1213 19:14:49.218285   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:49.218343   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:49.222331   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:49.226027   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:49.226107   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:49.258473   92925 cri.go:89] found id: ""
	I1213 19:14:49.258496   92925 logs.go:282] 0 containers: []
	W1213 19:14:49.258504   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:49.258512   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:49.258570   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:49.285496   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:49.285560   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:49.285578   92925 cri.go:89] found id: ""
	I1213 19:14:49.285601   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:49.285684   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:49.291508   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:49.296197   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:49.296358   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:49.325094   92925 cri.go:89] found id: ""
	I1213 19:14:49.325119   92925 logs.go:282] 0 containers: []
	W1213 19:14:49.325127   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:49.325134   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:49.325193   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:49.350750   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:49.350777   92925 cri.go:89] found id: ""
	I1213 19:14:49.350794   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:49.350857   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:49.354789   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:49.354915   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:49.381275   92925 cri.go:89] found id: ""
	I1213 19:14:49.381302   92925 logs.go:282] 0 containers: []
	W1213 19:14:49.381311   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:49.381320   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:49.381331   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:49.473722   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:49.473760   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:49.486016   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:49.486083   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:49.523030   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:49.523060   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:49.602664   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:49.602699   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:49.685307   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:49.685343   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:49.720678   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:49.720706   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:49.787762   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:49.779084   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:49.779733   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:49.781504   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:49.782055   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:49.783675   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:49.779084   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:49.779733   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:49.781504   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:49.782055   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:49.783675   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:49.787782   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:49.787795   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:49.826153   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:49.826188   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:49.871719   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:49.871752   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:49.902768   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:49.902858   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:52.432900   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:52.443527   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:52.443639   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:52.470204   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:52.470237   92925 cri.go:89] found id: ""
	I1213 19:14:52.470247   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:52.470302   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:52.473971   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:52.474058   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:52.501963   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:52.501983   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:52.501987   92925 cri.go:89] found id: ""
	I1213 19:14:52.501994   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:52.502048   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:52.505744   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:52.509295   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:52.509368   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:52.534850   92925 cri.go:89] found id: ""
	I1213 19:14:52.534917   92925 logs.go:282] 0 containers: []
	W1213 19:14:52.534943   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:52.534959   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:52.535033   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:52.570973   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:52.571045   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:52.571066   92925 cri.go:89] found id: ""
	I1213 19:14:52.571086   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:52.571156   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:52.574824   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:52.578317   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:52.578384   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:52.606849   92925 cri.go:89] found id: ""
	I1213 19:14:52.606873   92925 logs.go:282] 0 containers: []
	W1213 19:14:52.606882   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:52.606888   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:52.606945   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:52.633073   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:52.633095   92925 cri.go:89] found id: ""
	I1213 19:14:52.633103   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:52.633169   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:52.636819   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:52.636895   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:52.663310   92925 cri.go:89] found id: ""
	I1213 19:14:52.663333   92925 logs.go:282] 0 containers: []
	W1213 19:14:52.663342   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:52.663350   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:52.663363   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:52.732904   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:52.724948   12171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:52.725610   12171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:52.727167   12171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:52.727671   12171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:52.729366   12171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:52.724948   12171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:52.725610   12171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:52.727167   12171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:52.727671   12171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:52.729366   12171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:52.732929   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:52.732943   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:52.771098   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:52.771129   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:52.846025   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:52.846063   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:52.888075   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:52.888104   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:52.992414   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:52.992452   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:53.007058   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:53.007089   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:53.034812   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:53.034841   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:53.078790   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:53.078828   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:53.134673   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:53.134708   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:53.162943   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:53.162969   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
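
Editor's note: each "Gathering logs for ..." step above shells out to a fixed command (journalctl -u for kubelet and CRI-O, dmesg for the kernel, crictl logs --tail 400 for containers). The following is a rough standalone approximation of that pattern using os/exec; the command strings mirror what the log shows, but the helper itself is hypothetical.

// gather_logs.go - hypothetical sketch of the per-source log collection seen above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Each entry mirrors a command visible in the log output above.
	sources := map[string]string{
		"kubelet": "sudo journalctl -u kubelet -n 400",
		"CRI-O":   "sudo journalctl -u crio -n 400",
		"dmesg":   "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
	}
	for name, cmd := range sources {
		fmt.Printf("==> %s <==\n", name)
		// Run through bash so the pipe in the dmesg command works, as the log shows.
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("gathering %s failed: %v\n", name, err)
		}
		fmt.Print(string(out))
	}
}
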
	I1213 19:14:55.740743   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:55.751731   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:55.751816   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:55.779888   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:55.779908   92925 cri.go:89] found id: ""
	I1213 19:14:55.779916   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:55.779976   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:55.783761   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:55.783831   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:55.810156   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:55.810175   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:55.810185   92925 cri.go:89] found id: ""
	I1213 19:14:55.810192   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:55.810252   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:55.814013   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:55.817577   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:55.817649   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:55.843468   92925 cri.go:89] found id: ""
	I1213 19:14:55.843491   92925 logs.go:282] 0 containers: []
	W1213 19:14:55.843499   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:55.843505   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:55.843561   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:55.870048   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:55.870081   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:55.870093   92925 cri.go:89] found id: ""
	I1213 19:14:55.870100   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:55.870158   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:55.874026   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:55.877764   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:55.877852   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:55.907873   92925 cri.go:89] found id: ""
	I1213 19:14:55.907900   92925 logs.go:282] 0 containers: []
	W1213 19:14:55.907909   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:55.907915   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:55.907976   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:55.934710   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:55.934732   92925 cri.go:89] found id: ""
	I1213 19:14:55.934740   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:55.934795   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:55.938598   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:55.938671   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:55.968271   92925 cri.go:89] found id: ""
	I1213 19:14:55.968337   92925 logs.go:282] 0 containers: []
	W1213 19:14:55.968361   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:55.968387   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:55.968416   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:56.002213   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:56.002285   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:56.029658   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:56.029741   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:56.125956   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:56.126039   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:56.139465   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:56.139492   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:56.191699   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:56.191735   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:56.278131   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:56.278179   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:56.314251   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:56.314283   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:56.383224   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:56.373948   12354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:56.374799   12354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:56.376672   12354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:56.377083   12354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:56.378823   12354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:56.373948   12354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:56.374799   12354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:56.376672   12354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:56.377083   12354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:56.378823   12354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:56.383248   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:56.383261   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:56.410961   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:56.410990   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:56.450595   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:56.450633   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
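
Editor's note: each cycle begins by discovering per-component container IDs with "crictl ps -a --quiet --name=<component>", recording "found id" entries or warning when nothing matches (as with coredns, kube-proxy, and kindnet here). A minimal standalone approximation of that discovery step follows; the component list is taken from the log, the wrapper itself is an assumption.

// list_components.go - sketch of the container-discovery step shown in each cycle.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, name := range components {
		// --quiet prints one container ID per line; -a includes exited containers.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("crictl ps failed for %q: %v\n", name, err)
			continue
		}
		ids := strings.Fields(strings.TrimSpace(string(out)))
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%d containers for %q: %v\n", len(ids), name, ids)
	}
}
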
	I1213 19:14:59.032642   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:59.043619   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:59.043712   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:59.070836   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:59.070859   92925 cri.go:89] found id: ""
	I1213 19:14:59.070867   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:59.070934   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:59.074933   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:59.075009   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:59.112290   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:59.112313   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:59.112318   92925 cri.go:89] found id: ""
	I1213 19:14:59.112325   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:59.112380   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:59.117374   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:59.121073   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:59.121166   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:59.159645   92925 cri.go:89] found id: ""
	I1213 19:14:59.159714   92925 logs.go:282] 0 containers: []
	W1213 19:14:59.159741   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:59.159763   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:59.159838   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:59.193406   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:59.193430   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:59.193435   92925 cri.go:89] found id: ""
	I1213 19:14:59.193443   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:59.193524   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:59.197329   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:59.201001   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:59.201109   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:59.227682   92925 cri.go:89] found id: ""
	I1213 19:14:59.227706   92925 logs.go:282] 0 containers: []
	W1213 19:14:59.227715   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:59.227721   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:59.227784   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:59.254466   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:59.254497   92925 cri.go:89] found id: ""
	I1213 19:14:59.254505   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:59.254561   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:59.258458   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:59.258530   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:59.285792   92925 cri.go:89] found id: ""
	I1213 19:14:59.285817   92925 logs.go:282] 0 containers: []
	W1213 19:14:59.285826   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:59.285835   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:59.285851   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:59.312955   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:59.312990   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:59.394158   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:59.394195   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:59.439055   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:59.439084   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:59.452200   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:59.452253   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:59.543624   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:59.535183   12473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:59.536016   12473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:59.537681   12473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:59.538269   12473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:59.539987   12473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:59.535183   12473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:59.536016   12473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:59.537681   12473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:59.538269   12473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:59.539987   12473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:59.543645   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:59.543659   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:59.571506   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:59.571533   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:59.615595   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:59.615634   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:59.717216   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:59.717256   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:59.764205   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:59.764243   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:59.840500   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:59.840538   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
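
Editor's note: the "container status" step above uses a shell fallback, "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a", i.e. prefer crictl and fall back to docker if crictl is missing or fails. A hedged Go sketch of the same fallback logic (the helper is illustrative, not minikube's code):

// container_status.go - sketch of the crictl-then-docker fallback from the
// "container status" gathering step.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
	if err != nil {
		fmt.Printf("crictl failed (%v), falling back to docker\n", err)
		out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
		if err != nil {
			fmt.Printf("docker fallback failed too: %v\n", err)
			return
		}
	}
	fmt.Print(string(out))
}
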
	I1213 19:15:02.367252   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:15:02.379179   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:15:02.379252   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:15:02.407368   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:02.407394   92925 cri.go:89] found id: ""
	I1213 19:15:02.407402   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:15:02.407464   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:02.411245   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:15:02.411321   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:15:02.439707   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:02.439727   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:02.439732   92925 cri.go:89] found id: ""
	I1213 19:15:02.439739   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:15:02.439793   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:02.443520   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:02.447838   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:15:02.447965   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:15:02.475049   92925 cri.go:89] found id: ""
	I1213 19:15:02.475077   92925 logs.go:282] 0 containers: []
	W1213 19:15:02.475086   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:15:02.475093   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:15:02.475153   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:15:02.509558   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:02.509582   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:02.509587   92925 cri.go:89] found id: ""
	I1213 19:15:02.509595   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:15:02.509652   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:02.513964   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:02.519816   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:15:02.519888   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:15:02.549572   92925 cri.go:89] found id: ""
	I1213 19:15:02.549639   92925 logs.go:282] 0 containers: []
	W1213 19:15:02.549653   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:15:02.549660   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:15:02.549720   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:15:02.578189   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:02.578215   92925 cri.go:89] found id: ""
	I1213 19:15:02.578224   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:15:02.578287   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:02.582094   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:15:02.582166   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:15:02.609748   92925 cri.go:89] found id: ""
	I1213 19:15:02.609774   92925 logs.go:282] 0 containers: []
	W1213 19:15:02.609783   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:15:02.609792   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:15:02.609823   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:02.660274   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:15:02.660313   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:02.737557   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:15:02.737590   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:15:02.821155   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:15:02.821193   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:15:02.853468   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:15:02.853501   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:15:02.866631   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:15:02.866661   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:02.895294   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:15:02.895323   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:02.940697   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:15:02.940734   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:02.970055   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:15:02.970088   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:03.002379   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:15:03.002409   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:15:03.096355   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:15:03.096390   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:15:03.189863   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:15:03.181408   12656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:03.182165   12656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:03.183899   12656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:03.184754   12656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:03.186389   12656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:15:03.181408   12656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:03.182165   12656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:03.183899   12656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:03.184754   12656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:03.186389   12656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
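
Editor's note: the "describe nodes" step that fails in every cycle is simply the bundled kubectl run against the node-local kubeconfig; while the apiserver is unreachable it exits with status 1 and the connection-refused stderr captured above. A sketch of reproducing that single check (binary and kubeconfig paths copied from the log, wrapper itself hypothetical):

// describe_nodes.go - sketch of the failing "describe nodes" step.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Paths taken verbatim from the log lines above.
	cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.34.2/kubectl",
		"describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	if err != nil {
		// With the apiserver down this reports "Process exited with status 1"
		// plus the connection-refused messages seen in the report.
		fmt.Printf("describe nodes failed: %v\n%s", err, out)
		return
	}
	fmt.Print(string(out))
}
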
	I1213 19:15:05.690514   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:15:05.702677   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:15:05.702772   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:15:05.730136   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:05.730160   92925 cri.go:89] found id: ""
	I1213 19:15:05.730169   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:15:05.730226   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:05.733966   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:15:05.734047   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:15:05.761337   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:05.761404   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:05.761425   92925 cri.go:89] found id: ""
	I1213 19:15:05.761450   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:15:05.761534   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:05.766511   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:05.770470   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:15:05.770545   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:15:05.803220   92925 cri.go:89] found id: ""
	I1213 19:15:05.803284   92925 logs.go:282] 0 containers: []
	W1213 19:15:05.803300   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:15:05.803306   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:15:05.803383   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:15:05.831772   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:05.831797   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:05.831803   92925 cri.go:89] found id: ""
	I1213 19:15:05.831810   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:15:05.831869   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:05.835814   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:05.839281   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:15:05.839351   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:15:05.870011   92925 cri.go:89] found id: ""
	I1213 19:15:05.870038   92925 logs.go:282] 0 containers: []
	W1213 19:15:05.870059   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:15:05.870065   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:15:05.870126   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:15:05.898850   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:05.898877   92925 cri.go:89] found id: ""
	I1213 19:15:05.898888   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:15:05.898943   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:05.903063   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:15:05.903177   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:15:05.930061   92925 cri.go:89] found id: ""
	I1213 19:15:05.930126   92925 logs.go:282] 0 containers: []
	W1213 19:15:05.930140   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:15:05.930150   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:15:05.930164   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:15:05.943518   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:15:05.943549   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:05.973699   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:15:05.973729   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:15:06.024591   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:15:06.024622   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:15:06.131997   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:15:06.132041   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:15:06.202110   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:15:06.193932   12751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:06.195174   12751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:06.196901   12751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:06.197593   12751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:06.198598   12751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:15:06.193932   12751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:06.195174   12751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:06.196901   12751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:06.197593   12751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:06.198598   12751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:15:06.202133   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:15:06.202145   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:06.241491   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:15:06.241525   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:06.289002   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:15:06.289076   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:06.376385   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:15:06.376422   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:06.406893   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:15:06.406920   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:06.438586   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:15:06.438615   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
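
Editor's note: the timestamps show the whole sequence (pgrep for the apiserver process, container discovery, log gathering, describe nodes) repeating roughly every three seconds without the cluster ever becoming reachable. Below is a simplified, assumed retry loop around the reachability probe from the earlier sketch; the interval and deadline are illustrative, since the log shows only the cadence, not a limit.

// wait_for_apiserver.go - illustrative loop: retry the TCP probe on a fixed
// interval until a deadline, mirroring the ~3 second cadence of the cycles above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const addr = "localhost:8443" // assumed apiserver address, as in the log
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("apiserver is accepting connections")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("gave up: apiserver never became reachable before the deadline")
}
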
	I1213 19:15:09.021141   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:15:09.032497   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:15:09.032597   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:15:09.061840   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:09.061871   92925 cri.go:89] found id: ""
	I1213 19:15:09.061881   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:15:09.061939   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:09.065632   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:15:09.065706   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:15:09.094419   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:09.094444   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:09.094449   92925 cri.go:89] found id: ""
	I1213 19:15:09.094456   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:15:09.094517   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:09.098305   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:09.108354   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:15:09.108432   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:15:09.137672   92925 cri.go:89] found id: ""
	I1213 19:15:09.137706   92925 logs.go:282] 0 containers: []
	W1213 19:15:09.137716   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:15:09.137722   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:15:09.137785   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:15:09.170831   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:09.170854   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:09.170859   92925 cri.go:89] found id: ""
	I1213 19:15:09.170866   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:15:09.170929   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:09.174672   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:09.177949   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:15:09.178023   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:15:09.208255   92925 cri.go:89] found id: ""
	I1213 19:15:09.208282   92925 logs.go:282] 0 containers: []
	W1213 19:15:09.208291   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:15:09.208297   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:15:09.208352   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:15:09.234350   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:09.234373   92925 cri.go:89] found id: ""
	I1213 19:15:09.234381   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:15:09.234453   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:09.238030   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:15:09.238102   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:15:09.264310   92925 cri.go:89] found id: ""
	I1213 19:15:09.264335   92925 logs.go:282] 0 containers: []
	W1213 19:15:09.264344   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:15:09.264352   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:15:09.264365   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:09.295245   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:15:09.295276   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:15:09.369835   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:15:09.369869   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:15:09.472350   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:15:09.472384   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:09.500555   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:15:09.500589   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:15:09.535996   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:15:09.536032   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:15:09.552067   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:15:09.552096   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:15:09.624766   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:15:09.616285   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:09.617238   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:09.618950   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:09.619348   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:09.620912   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:15:09.616285   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:09.617238   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:09.618950   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:09.619348   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:09.620912   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:15:09.624810   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:15:09.624823   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:09.654769   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:15:09.654796   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:09.695636   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:15:09.695711   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:09.740840   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:15:09.740873   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:12.330150   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:15:12.341327   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:15:12.341430   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:15:12.373666   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:12.373692   92925 cri.go:89] found id: ""
	I1213 19:15:12.373699   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:15:12.373760   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:12.377493   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:15:12.377563   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:15:12.407860   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:12.407882   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:12.407886   92925 cri.go:89] found id: ""
	I1213 19:15:12.407897   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:15:12.407965   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:12.411939   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:12.416613   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:15:12.416687   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:15:12.447044   92925 cri.go:89] found id: ""
	I1213 19:15:12.447071   92925 logs.go:282] 0 containers: []
	W1213 19:15:12.447080   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:15:12.447086   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:15:12.447149   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:15:12.474565   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:12.474599   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:12.474604   92925 cri.go:89] found id: ""
	I1213 19:15:12.474612   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:15:12.474669   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:12.478501   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:12.482327   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:15:12.482425   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:15:12.519207   92925 cri.go:89] found id: ""
	I1213 19:15:12.519235   92925 logs.go:282] 0 containers: []
	W1213 19:15:12.519245   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:15:12.519252   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:15:12.519330   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:15:12.548236   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:12.548259   92925 cri.go:89] found id: ""
	I1213 19:15:12.548269   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:15:12.548334   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:12.552167   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:15:12.552292   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:15:12.581061   92925 cri.go:89] found id: ""
	I1213 19:15:12.581086   92925 logs.go:282] 0 containers: []
	W1213 19:15:12.581094   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:15:12.581103   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:15:12.581115   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:12.626762   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:15:12.626795   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:12.676771   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:15:12.676803   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:12.708623   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:15:12.708661   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:12.735332   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:15:12.735361   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:15:12.830566   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:15:12.830606   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:12.858035   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:15:12.858107   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:12.953406   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:15:12.953445   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:15:13.037585   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:15:13.037626   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:15:13.070076   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:15:13.070108   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:15:13.083239   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:15:13.083266   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:15:13.171369   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:15:13.163050   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:13.163831   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:13.165471   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:13.166105   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:13.167624   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:15:13.163050   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:13.163831   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:13.165471   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:13.166105   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:13.167624   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:15:15.672265   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:15:15.683518   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:15:15.683589   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:15:15.713736   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:15.713764   92925 cri.go:89] found id: ""
	I1213 19:15:15.713773   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:15:15.713845   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:15.718041   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:15:15.718116   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:15:15.745439   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:15.745462   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:15.745467   92925 cri.go:89] found id: ""
	I1213 19:15:15.745475   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:15:15.745555   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:15.749679   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:15.753271   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:15:15.753343   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:15:15.780766   92925 cri.go:89] found id: ""
	I1213 19:15:15.780791   92925 logs.go:282] 0 containers: []
	W1213 19:15:15.780800   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:15:15.780806   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:15:15.780867   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:15:15.809433   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:15.809453   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:15.809458   92925 cri.go:89] found id: ""
	I1213 19:15:15.809466   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:15:15.809521   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:15.813350   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:15.816829   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:15:15.816899   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:15:15.843466   92925 cri.go:89] found id: ""
	I1213 19:15:15.843491   92925 logs.go:282] 0 containers: []
	W1213 19:15:15.843501   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:15:15.843507   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:15:15.843566   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:15:15.869979   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:15.870003   92925 cri.go:89] found id: ""
	I1213 19:15:15.870012   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:15:15.870069   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:15.873941   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:15:15.874036   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:15:15.906204   92925 cri.go:89] found id: ""
	I1213 19:15:15.906268   92925 logs.go:282] 0 containers: []
	W1213 19:15:15.906283   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:15:15.906293   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:15:15.906305   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:15:16.002221   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:15:16.002261   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:16.030993   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:15:16.031024   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:16.078933   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:15:16.078967   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:15:16.173955   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:15:16.174010   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:15:16.207960   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:15:16.207989   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:15:16.221095   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:15:16.221124   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:15:16.290865   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:15:16.280288   13166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:16.281366   13166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:16.282142   13166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:16.283740   13166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:16.284314   13166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:15:16.280288   13166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:16.281366   13166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:16.282142   13166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:16.283740   13166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:16.284314   13166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:15:16.290940   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:15:16.290969   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:16.330431   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:15:16.330462   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:16.403747   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:15:16.403785   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:16.435000   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:15:16.435076   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:18.967118   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:15:18.978473   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:15:18.978548   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:15:19.009416   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:19.009442   92925 cri.go:89] found id: ""
	I1213 19:15:19.009450   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:15:19.009506   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:19.013229   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:15:19.013304   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:15:19.046195   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:19.046217   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:19.046221   92925 cri.go:89] found id: ""
	I1213 19:15:19.046228   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:15:19.046284   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:19.050380   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:19.055287   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:15:19.055364   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:15:19.084697   92925 cri.go:89] found id: ""
	I1213 19:15:19.084724   92925 logs.go:282] 0 containers: []
	W1213 19:15:19.084734   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:15:19.084740   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:15:19.084799   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:15:19.134188   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:19.134212   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:19.134217   92925 cri.go:89] found id: ""
	I1213 19:15:19.134225   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:15:19.134281   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:19.139452   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:19.143380   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:15:19.143515   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:15:19.176707   92925 cri.go:89] found id: ""
	I1213 19:15:19.176733   92925 logs.go:282] 0 containers: []
	W1213 19:15:19.176742   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:15:19.176748   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:15:19.176808   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:15:19.205658   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:19.205681   92925 cri.go:89] found id: ""
	I1213 19:15:19.205689   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:15:19.205769   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:19.209480   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:15:19.209556   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:15:19.236187   92925 cri.go:89] found id: ""
	I1213 19:15:19.236210   92925 logs.go:282] 0 containers: []
	W1213 19:15:19.236219   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:15:19.236227   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:15:19.236239   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:15:19.335347   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:15:19.335384   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:15:19.347594   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:15:19.347622   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:15:19.423749   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:15:19.415662   13274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:19.416536   13274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:19.418222   13274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:19.418572   13274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:19.420106   13274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:15:19.415662   13274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:19.416536   13274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:19.418222   13274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:19.418572   13274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:19.420106   13274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:15:19.423773   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:15:19.423785   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:19.458293   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:15:19.458322   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:19.491891   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:15:19.491981   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:15:19.532203   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:15:19.532289   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:19.572383   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:15:19.572416   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:19.623843   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:15:19.623878   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:19.701590   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:15:19.701669   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:19.730646   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:15:19.730674   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:15:22.313136   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:15:22.324070   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:15:22.324192   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:15:22.354911   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:22.354936   92925 cri.go:89] found id: ""
	I1213 19:15:22.354944   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:15:22.355017   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:22.359138   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:15:22.359232   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:15:22.387533   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:22.387553   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:22.387559   92925 cri.go:89] found id: ""
	I1213 19:15:22.387567   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:15:22.387622   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:22.391451   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:22.395283   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:15:22.395396   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:15:22.424307   92925 cri.go:89] found id: ""
	I1213 19:15:22.424330   92925 logs.go:282] 0 containers: []
	W1213 19:15:22.424338   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:15:22.424345   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:15:22.424406   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:15:22.453085   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:22.453146   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:22.453167   92925 cri.go:89] found id: ""
	I1213 19:15:22.453192   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:15:22.453265   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:22.457420   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:22.461164   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:15:22.461238   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:15:22.491907   92925 cri.go:89] found id: ""
	I1213 19:15:22.491930   92925 logs.go:282] 0 containers: []
	W1213 19:15:22.491939   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:15:22.491944   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:15:22.492029   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:15:22.527521   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:22.527588   92925 cri.go:89] found id: ""
	I1213 19:15:22.527615   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:15:22.527710   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:22.531946   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:15:22.532027   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:15:22.559453   92925 cri.go:89] found id: ""
	I1213 19:15:22.559480   92925 logs.go:282] 0 containers: []
	W1213 19:15:22.559499   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:15:22.559510   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:15:22.559522   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:22.601772   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:15:22.601808   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:22.649158   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:15:22.649193   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:22.676639   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:15:22.676667   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:15:22.777850   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:15:22.777888   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:15:22.851444   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:15:22.842501   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:22.843358   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:22.845491   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:22.846536   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:22.847439   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:15:22.842501   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:22.843358   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:22.845491   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:22.846536   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:22.847439   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:15:22.851468   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:15:22.851480   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:22.933320   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:15:22.933358   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:22.962559   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:15:22.962589   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:15:23.059725   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:15:23.059803   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:15:23.109255   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:15:23.109286   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:15:23.122814   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:15:23.122844   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:25.651780   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:15:25.662957   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:15:25.663032   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:15:25.696971   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:25.696993   92925 cri.go:89] found id: ""
	I1213 19:15:25.697001   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:15:25.697087   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:25.701838   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:15:25.701919   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:15:25.738295   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:25.738373   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:25.738386   92925 cri.go:89] found id: ""
	I1213 19:15:25.738395   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:15:25.738459   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:25.742364   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:25.746297   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:15:25.746400   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:15:25.772105   92925 cri.go:89] found id: ""
	I1213 19:15:25.772178   92925 logs.go:282] 0 containers: []
	W1213 19:15:25.772201   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:15:25.772221   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:15:25.772305   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:15:25.799458   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:25.799526   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:25.799546   92925 cri.go:89] found id: ""
	I1213 19:15:25.799570   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:15:25.799645   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:25.803647   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:25.807583   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:15:25.807695   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:15:25.834975   92925 cri.go:89] found id: ""
	I1213 19:15:25.835051   92925 logs.go:282] 0 containers: []
	W1213 19:15:25.835066   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:15:25.835073   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:15:25.835133   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:15:25.864722   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:25.864769   92925 cri.go:89] found id: ""
	I1213 19:15:25.864778   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:15:25.864836   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:25.868764   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:15:25.868838   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:15:25.897111   92925 cri.go:89] found id: ""
	I1213 19:15:25.897133   92925 logs.go:282] 0 containers: []
	W1213 19:15:25.897141   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:15:25.897162   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:15:25.897174   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:15:26.007072   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:15:26.007104   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:15:26.025166   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:15:26.025201   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:15:26.111354   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:15:26.097401   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:26.097781   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:26.105030   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:26.105458   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:26.107065   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:15:26.097401   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:26.097781   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:26.105030   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:26.105458   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:26.107065   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:15:26.111374   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:15:26.111387   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:26.141476   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:15:26.141507   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:26.169374   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:15:26.169404   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:15:26.246093   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:15:26.246133   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:15:26.297802   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:15:26.297829   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:26.325154   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:15:26.325182   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:26.368489   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:15:26.368524   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:26.414072   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:15:26.414110   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:29.001164   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:15:29.013204   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:15:29.013272   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:15:29.047888   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:29.047909   92925 cri.go:89] found id: ""
	I1213 19:15:29.047918   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:15:29.047982   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:29.051890   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:15:29.051971   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:15:29.077464   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:29.077486   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:29.077490   92925 cri.go:89] found id: ""
	I1213 19:15:29.077498   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:15:29.077553   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:29.081462   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:29.084988   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:15:29.085157   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:15:29.115595   92925 cri.go:89] found id: ""
	I1213 19:15:29.115621   92925 logs.go:282] 0 containers: []
	W1213 19:15:29.115631   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:15:29.115637   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:15:29.115697   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:15:29.160656   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:29.160729   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:29.160748   92925 cri.go:89] found id: ""
	I1213 19:15:29.160772   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:15:29.160853   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:29.165160   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:29.168775   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:15:29.168891   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:15:29.199867   92925 cri.go:89] found id: ""
	I1213 19:15:29.199890   92925 logs.go:282] 0 containers: []
	W1213 19:15:29.199899   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:15:29.199911   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:15:29.200009   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:15:29.226478   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:29.226502   92925 cri.go:89] found id: ""
	I1213 19:15:29.226511   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:15:29.226565   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:29.230306   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:15:29.230382   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:15:29.260973   92925 cri.go:89] found id: ""
	I1213 19:15:29.260999   92925 logs.go:282] 0 containers: []
	W1213 19:15:29.261034   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:15:29.261044   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:15:29.261060   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:29.288533   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:15:29.288560   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:29.317072   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:15:29.317145   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:29.343899   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:15:29.343926   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:15:29.424466   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:15:29.424502   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:15:29.437265   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:15:29.437314   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:15:29.525751   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:15:29.505457   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:29.506350   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:29.518441   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:29.520261   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:29.521214   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:15:29.505457   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:29.506350   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:29.518441   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:29.520261   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:29.521214   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:15:29.525774   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:15:29.525787   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:29.565912   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:15:29.565947   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:29.614921   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:15:29.614962   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:29.695191   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:15:29.695229   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:15:29.726876   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:15:29.726907   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:15:32.331342   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:15:32.342123   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:15:32.342193   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:15:32.377492   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:32.377512   92925 cri.go:89] found id: ""
	I1213 19:15:32.377520   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:15:32.377603   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:32.381461   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:15:32.381535   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:15:32.408828   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:32.408849   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:32.408853   92925 cri.go:89] found id: ""
	I1213 19:15:32.408861   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:15:32.408913   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:32.412666   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:32.416683   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:15:32.416757   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:15:32.444710   92925 cri.go:89] found id: ""
	I1213 19:15:32.444734   92925 logs.go:282] 0 containers: []
	W1213 19:15:32.444744   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:15:32.444750   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:15:32.444842   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:15:32.470813   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:32.470834   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:32.470839   92925 cri.go:89] found id: ""
	I1213 19:15:32.470846   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:15:32.470904   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:32.474746   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:32.478110   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:15:32.478180   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:15:32.505590   92925 cri.go:89] found id: ""
	I1213 19:15:32.505616   92925 logs.go:282] 0 containers: []
	W1213 19:15:32.505625   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:15:32.505630   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:15:32.505685   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:15:32.534851   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:32.534873   92925 cri.go:89] found id: ""
	I1213 19:15:32.534882   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:15:32.534942   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:32.538913   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:15:32.539005   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:15:32.570980   92925 cri.go:89] found id: ""
	I1213 19:15:32.571020   92925 logs.go:282] 0 containers: []
	W1213 19:15:32.571029   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:15:32.571055   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:15:32.571075   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:15:32.672697   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:15:32.672739   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:15:32.685325   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:15:32.685360   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:15:32.762805   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:15:32.754695   13818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:32.755445   13818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:32.756898   13818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:32.757344   13818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:32.759247   13818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:15:32.754695   13818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:32.755445   13818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:32.756898   13818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:32.757344   13818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:32.759247   13818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:15:32.762877   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:15:32.762899   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:32.788216   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:15:32.788243   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:32.831764   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:15:32.831797   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:32.861451   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:15:32.861481   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:32.889040   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:15:32.889113   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:15:32.962682   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:15:32.962721   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:33.005926   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:15:33.005963   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:33.113066   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:15:33.113100   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:15:35.646466   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:15:35.657328   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:15:35.657400   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:15:35.682772   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:35.682796   92925 cri.go:89] found id: ""
	I1213 19:15:35.682805   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:15:35.682862   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:35.686943   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:15:35.687017   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:15:35.713394   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:35.713426   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:35.713433   92925 cri.go:89] found id: ""
	I1213 19:15:35.713440   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:15:35.713492   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:35.717236   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:35.720957   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:15:35.721060   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:15:35.747062   92925 cri.go:89] found id: ""
	I1213 19:15:35.747139   92925 logs.go:282] 0 containers: []
	W1213 19:15:35.747155   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:15:35.747162   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:15:35.747223   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:15:35.780788   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:35.780809   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:35.780814   92925 cri.go:89] found id: ""
	I1213 19:15:35.780822   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:15:35.780877   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:35.784913   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:35.788950   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:15:35.789084   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:15:35.817183   92925 cri.go:89] found id: ""
	I1213 19:15:35.817206   92925 logs.go:282] 0 containers: []
	W1213 19:15:35.817217   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:15:35.817223   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:15:35.817285   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:15:35.844649   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:35.844674   92925 cri.go:89] found id: ""
	I1213 19:15:35.844682   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:15:35.844741   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:35.848694   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:15:35.848772   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:15:35.880264   92925 cri.go:89] found id: ""
	I1213 19:15:35.880293   92925 logs.go:282] 0 containers: []
	W1213 19:15:35.880302   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:15:35.880311   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:15:35.880323   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:35.928133   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:15:35.928168   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:36.005056   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:15:36.005095   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:15:36.088199   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:15:36.088234   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:15:36.195615   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:15:36.195657   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:36.222570   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:15:36.222597   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:36.253158   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:15:36.253189   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:36.282294   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:15:36.282324   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:15:36.315027   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:15:36.315057   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:15:36.327415   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:15:36.327445   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:15:36.397770   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:15:36.388485   14006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:36.389249   14006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:36.391121   14006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:36.392189   14006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:36.392759   14006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:15:36.388485   14006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:36.389249   14006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:36.391121   14006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:36.392189   14006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:36.392759   14006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:15:36.397793   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:15:36.397809   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:38.950291   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:15:38.966129   92925 out.go:203] 
	W1213 19:15:38.969186   92925 out.go:285] X Exiting due to K8S_APISERVER_MISSING: adding node: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1213 19:15:38.969230   92925 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1213 19:15:38.969244   92925 out.go:285] * Related issues:
	W1213 19:15:38.969256   92925 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1213 19:15:38.969271   92925 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1213 19:15:38.972406   92925 out.go:203] 
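	The exit above (K8S_APISERVER_MISSING) means the kube-apiserver process never appeared within the 6m0s node wait. The probes minikube ran are visible in the trace; as a minimal sketch, the same checks can be repeated by hand on the affected node (for example via minikube ssh -p ha-605114, with -n <node-name> to pick a specific node in this multi-node profile; the container ID is the one the trace found):
	
	  # Sketch: the same apiserver probes the trace ran, executed on the node.
	  sudo pgrep -xnf kube-apiserver.*minikube.*
	  sudo crictl ps -a --quiet --name=kube-apiserver
	  sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e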
	
	
	==> CRI-O <==
	Dec 13 19:09:40 ha-605114 crio[670]: time="2025-12-13T19:09:40.008646414Z" level=info msg="Started container" PID=1413 containerID=162b495909eae3cb5f079d5fd260e61e560cd11212e69ad52138f4180f770a5b description=kube-system/storage-provisioner/storage-provisioner id=78f061d7-6d54-48f8-b513-d5c320e8e810 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3b4d0206cec1a1b4c0b5752a4babdaf8710471f5502067896b44e2d2df0c4d5b
	Dec 13 19:09:40 ha-605114 crio[670]: time="2025-12-13T19:09:40.011070102Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.12.1" id=d15204a7-37cc-4d8c-a231-166dcd68a520 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 19:09:40 ha-605114 crio[670]: time="2025-12-13T19:09:40.012539045Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.12.1" id=6b3690d3-7f7d-43f9-95f1-1cd8e6e953ff name=/runtime.v1.ImageService/ImageStatus
	Dec 13 19:09:40 ha-605114 crio[670]: time="2025-12-13T19:09:40.02550851Z" level=info msg="Creating container: kube-system/coredns-66bc5c9577-85rpk/coredns" id=ac3e351b-9839-445c-b06c-72f089234671 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 19:09:40 ha-605114 crio[670]: time="2025-12-13T19:09:40.025812066Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 19:09:40 ha-605114 crio[670]: time="2025-12-13T19:09:40.048513937Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 19:09:40 ha-605114 crio[670]: time="2025-12-13T19:09:40.049307526Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 19:09:40 ha-605114 crio[670]: time="2025-12-13T19:09:40.073222358Z" level=info msg="Created container 98620d4f3c674bb9bab6e41c90c32e2b069e67c18730baafb91af41ae8c19bcf: default/busybox-7b57f96db7-h5qqv/busybox" id=3c28fa9a-be33-4fec-ad16-52c4765c6b6f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 19:09:40 ha-605114 crio[670]: time="2025-12-13T19:09:40.082412808Z" level=info msg="Starting container: 98620d4f3c674bb9bab6e41c90c32e2b069e67c18730baafb91af41ae8c19bcf" id=7ee27ecf-6fea-48b9-9feb-9cb5f5270b26 name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 19:09:40 ha-605114 crio[670]: time="2025-12-13T19:09:40.109207129Z" level=info msg="Started container" PID=1422 containerID=98620d4f3c674bb9bab6e41c90c32e2b069e67c18730baafb91af41ae8c19bcf description=default/busybox-7b57f96db7-h5qqv/busybox id=7ee27ecf-6fea-48b9-9feb-9cb5f5270b26 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3641321fd538fed941abd3cee5bdec42be3fbe581a0a743eea30ee6edf2692ee
	Dec 13 19:09:40 ha-605114 crio[670]: time="2025-12-13T19:09:40.121281524Z" level=info msg="Created container 511836b213244a6dfa3897abb4838a98fc68e420901993467750d852b23b8505: kube-system/coredns-66bc5c9577-85rpk/coredns" id=ac3e351b-9839-445c-b06c-72f089234671 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 19:09:40 ha-605114 crio[670]: time="2025-12-13T19:09:40.122743263Z" level=info msg="Starting container: 511836b213244a6dfa3897abb4838a98fc68e420901993467750d852b23b8505" id=4e4e597f-bb09-435f-a3da-58627ddb7595 name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 19:09:40 ha-605114 crio[670]: time="2025-12-13T19:09:40.124507425Z" level=info msg="Started container" PID=1433 containerID=511836b213244a6dfa3897abb4838a98fc68e420901993467750d852b23b8505 description=kube-system/coredns-66bc5c9577-85rpk/coredns id=4e4e597f-bb09-435f-a3da-58627ddb7595 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1d4641fc3fdaccf9146fa15e852f55d85346be6c485420108067be6aabe0b5f4
	Dec 13 19:09:45 ha-605114 crio[670]: time="2025-12-13T19:09:45.122399466Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 19:09:45 ha-605114 crio[670]: time="2025-12-13T19:09:45.129604955Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 19:09:45 ha-605114 crio[670]: time="2025-12-13T19:09:45.129827191Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 19:09:45 ha-605114 crio[670]: time="2025-12-13T19:09:45.129946091Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 19:09:45 ha-605114 crio[670]: time="2025-12-13T19:09:45.139648811Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 19:09:45 ha-605114 crio[670]: time="2025-12-13T19:09:45.139699543Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 19:09:45 ha-605114 crio[670]: time="2025-12-13T19:09:45.139727531Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 19:09:45 ha-605114 crio[670]: time="2025-12-13T19:09:45.147861576Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 19:09:45 ha-605114 crio[670]: time="2025-12-13T19:09:45.148118551Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 19:09:45 ha-605114 crio[670]: time="2025-12-13T19:09:45.148270222Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 19:09:45 ha-605114 crio[670]: time="2025-12-13T19:09:45.153836563Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 19:09:45 ha-605114 crio[670]: time="2025-12-13T19:09:45.154024681Z" level=info msg="Updated default CNI network name to kindnet"
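	The CRI-O slice above ends at 19:09:45, several minutes before the 19:15 apiserver probes that failed, so the runtime journal likely holds more recent entries than what was captured. A minimal sketch of pulling a larger window with the same journalctl invocation the gathering step used (run on the node):
	
	  # Same unit the trace queried, with a longer tail and no pager.
	  sudo journalctl -u crio -n 1000 --no-pager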
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	511836b213244       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   6 minutes ago       Running             coredns                   2                   1d4641fc3fdac       coredns-66bc5c9577-85rpk            kube-system
	98620d4f3c674       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   6 minutes ago       Running             busybox                   2                   3641321fd538f       busybox-7b57f96db7-h5qqv            default
	162b495909eae       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   6 minutes ago       Running             storage-provisioner       4                   3b4d0206cec1a       storage-provisioner                 kube-system
	167e9e0789f86       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   6 minutes ago       Running             kube-controller-manager   7                   c35b44e70d6d7       kube-controller-manager-ha-605114   kube-system
	7bc9cb09a081e       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   6 minutes ago       Exited              kube-controller-manager   6                   c35b44e70d6d7       kube-controller-manager-ha-605114   kube-system
	76f4d2ef7a334       369db9dfa6fa96c1f4a0f3c827dbe864b5ded1802c8b4810b5ff9fcc5f5f2c70   7 minutes ago       Running             kube-vip                  3                   6e0df90fd1fab       kube-vip-ha-605114                  kube-system
	7db7b17ab2144       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   7 minutes ago       Running             coredns                   2                   d895cdca857a1       coredns-66bc5c9577-rc9qg            kube-system
	adb6a0d2cd304       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   7 minutes ago       Running             kube-proxy                2                   511ce74a57340       kube-proxy-c6t4v                    kube-system
	f1a416886d288       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   7 minutes ago       Running             kindnet-cni               2                   e61041a4c5e3e       kindnet-dtnb7                       kube-system
	9a81ddd488bb7       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   7 minutes ago       Running             etcd                      2                   a40bba21dff67       etcd-ha-605114                      kube-system
	ee202abc8dba3       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   7 minutes ago       Running             kube-scheduler            2                   5a646569f389f       kube-scheduler-ha-605114            kube-system
	3c729bb1538bf       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   7 minutes ago       Running             kube-apiserver            2                   390331a7238b2       kube-apiserver-ha-605114            kube-system
	2b3744a5aa7a9       369db9dfa6fa96c1f4a0f3c827dbe864b5ded1802c8b4810b5ff9fcc5f5f2c70   7 minutes ago       Exited              kube-vip                  2                   6e0df90fd1fab       kube-vip-ha-605114                  kube-system
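	The table above is the output of the crictl fallback shown earlier in the trace (sudo `which crictl || echo crictl` ps -a || sudo docker ps -a). A hedged sketch for digging into any one entry, reusing the truncated IDs from the table (crictl generally accepts ID prefixes; use the full ID if it complains):
	
	  # Sketch: inspect and tail the kube-apiserver container listed above.
	  sudo crictl inspect 3c729bb1538bf
	  sudo crictl logs --tail 100 3c729bb1538bf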
	
	
	==> coredns [511836b213244a6dfa3897abb4838a98fc68e420901993467750d852b23b8505] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60720 - 44913 "HINFO IN 3829035828325911617.4912160736216291985. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012907336s
	
	
	==> coredns [7db7b17ab2144a863bb29b6e2f750b6eb865e786cf824a74c0b415ac4077800a] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58025 - 60628 "HINFO IN 3868133962360849883.307927823530690311. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.054923758s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
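	Both coredns instances report the in-cluster API endpoint 10.96.0.1:443 as unreachable (connection refused, then i/o timeout), which is consistent with the apiserver failure reported earlier rather than a DNS problem in itself. As a rough sketch, the service VIP and the local apiserver port can be probed from the node to separate apiserver health from kube-proxy routing (assumes curl is present in the node image):
	
	  # Sketch: probe the kubernetes service VIP and the local apiserver port from the node.
	  curl -sk https://10.96.0.1:443/version || echo "service VIP unreachable"
	  curl -sk https://localhost:8443/version || echo "local apiserver unreachable"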
	
	
	==> describe nodes <==
	Name:               ha-605114
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-605114
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=142a8bd7cb3f031b5f72a3965bb211dc77d9e1a7
	                    minikube.k8s.io/name=ha-605114
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T18_59_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 18:59:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-605114
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 19:15:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 19:15:26 +0000   Sat, 13 Dec 2025 18:59:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 19:15:26 +0000   Sat, 13 Dec 2025 18:59:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 19:15:26 +0000   Sat, 13 Dec 2025 18:59:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 19:15:26 +0000   Sat, 13 Dec 2025 19:00:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-605114
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 78f85184c267cd52312ad0096937f858
	  System UUID:                8ff9857c-e2f0-4d86-9970-2f9e1bad48df
	  Boot ID:                    76aeba50-958b-45ee-957d-e00cd07a99b2
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-h5qqv             0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-66bc5c9577-85rpk             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     16m
	  kube-system                 coredns-66bc5c9577-rc9qg             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     16m
	  kube-system                 etcd-ha-605114                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         16m
	  kube-system                 kindnet-dtnb7                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      16m
	  kube-system                 kube-apiserver-ha-605114             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-605114    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-c6t4v                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-605114             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-605114                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m56s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m42s                  kube-proxy       
	  Normal   Starting                 16m                    kube-proxy       
	  Normal   Starting                 9m53s                  kube-proxy       
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 16m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     16m (x8 over 16m)      kubelet          Node ha-605114 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    16m (x8 over 16m)      kubelet          Node ha-605114 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  16m (x8 over 16m)      kubelet          Node ha-605114 status is now: NodeHasSufficientMemory
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 16m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  16m                    kubelet          Node ha-605114 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    16m                    kubelet          Node ha-605114 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     16m                    kubelet          Node ha-605114 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           16m                    node-controller  Node ha-605114 event: Registered Node ha-605114 in Controller
	  Normal   RegisteredNode           15m                    node-controller  Node ha-605114 event: Registered Node ha-605114 in Controller
	  Normal   NodeReady                15m                    kubelet          Node ha-605114 status is now: NodeReady
	  Normal   RegisteredNode           14m                    node-controller  Node ha-605114 event: Registered Node ha-605114 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-605114 event: Registered Node ha-605114 in Controller
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node ha-605114 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node ha-605114 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)      kubelet          Node ha-605114 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           9m51s                  node-controller  Node ha-605114 event: Registered Node ha-605114 in Controller
	  Normal   RegisteredNode           9m37s                  node-controller  Node ha-605114 event: Registered Node ha-605114 in Controller
	  Normal   RegisteredNode           9m17s                  node-controller  Node ha-605114 event: Registered Node ha-605114 in Controller
	  Normal   Starting                 7m54s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 7m54s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  7m54s (x8 over 7m54s)  kubelet          Node ha-605114 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m54s (x8 over 7m54s)  kubelet          Node ha-605114 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m54s (x8 over 7m54s)  kubelet          Node ha-605114 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m5s                   node-controller  Node ha-605114 event: Registered Node ha-605114 in Controller
	
	
	Name:               ha-605114-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-605114-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=142a8bd7cb3f031b5f72a3965bb211dc77d9e1a7
	                    minikube.k8s.io/name=ha-605114
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_13T19_00_04_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 19:00:03 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-605114-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 19:07:15 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 13 Dec 2025 19:05:54 +0000   Sat, 13 Dec 2025 19:10:33 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 13 Dec 2025 19:05:54 +0000   Sat, 13 Dec 2025 19:10:33 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 13 Dec 2025 19:05:54 +0000   Sat, 13 Dec 2025 19:10:33 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 13 Dec 2025 19:05:54 +0000   Sat, 13 Dec 2025 19:10:33 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-605114-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 78f85184c267cd52312ad0096937f858
	  System UUID:                c9a90528-cc46-44be-a006-2245d1e8d275
	  Boot ID:                    76aeba50-958b-45ee-957d-e00cd07a99b2
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-gqp98                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-605114-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         15m
	  kube-system                 kindnet-hxgh6                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      15m
	  kube-system                 kube-apiserver-ha-605114-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-605114-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-87qlc                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-605114-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-605114-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 15m                kube-proxy       
	  Normal   Starting                 9m38s              kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   RegisteredNode           15m                node-controller  Node ha-605114-m02 event: Registered Node ha-605114-m02 in Controller
	  Normal   RegisteredNode           15m                node-controller  Node ha-605114-m02 event: Registered Node ha-605114-m02 in Controller
	  Normal   RegisteredNode           14m                node-controller  Node ha-605114-m02 event: Registered Node ha-605114-m02 in Controller
	  Normal   NodeHasSufficientPID     11m (x8 over 11m)  kubelet          Node ha-605114-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-605114-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-605114-m02 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeNotReady             11m                node-controller  Node ha-605114-m02 status is now: NodeNotReady
	  Normal   RegisteredNode           11m                node-controller  Node ha-605114-m02 event: Registered Node ha-605114-m02 in Controller
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node ha-605114-m02 status is now: NodeHasSufficientMemory
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node ha-605114-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node ha-605114-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           9m51s              node-controller  Node ha-605114-m02 event: Registered Node ha-605114-m02 in Controller
	  Normal   RegisteredNode           9m37s              node-controller  Node ha-605114-m02 event: Registered Node ha-605114-m02 in Controller
	  Normal   RegisteredNode           9m17s              node-controller  Node ha-605114-m02 event: Registered Node ha-605114-m02 in Controller
	  Normal   RegisteredNode           6m5s               node-controller  Node ha-605114-m02 event: Registered Node ha-605114-m02 in Controller
	  Normal   NodeNotReady             5m14s              node-controller  Node ha-605114-m02 status is now: NodeNotReady
	
	
	Name:               ha-605114-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-605114-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=142a8bd7cb3f031b5f72a3965bb211dc77d9e1a7
	                    minikube.k8s.io/name=ha-605114
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_13T19_02_38_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 19:02:38 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-605114-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 19:07:09 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 13 Dec 2025 19:06:39 +0000   Sat, 13 Dec 2025 19:10:33 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 13 Dec 2025 19:06:39 +0000   Sat, 13 Dec 2025 19:10:33 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 13 Dec 2025 19:06:39 +0000   Sat, 13 Dec 2025 19:10:33 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 13 Dec 2025 19:06:39 +0000   Sat, 13 Dec 2025 19:10:33 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-605114-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 78f85184c267cd52312ad0096937f858
	  System UUID:                1710ae92-5ee6-4178-a2ff-b2523f5ef2e1
	  Boot ID:                    76aeba50-958b-45ee-957d-e00cd07a99b2
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-wl925    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m49s
	  kube-system                 kindnet-9xnpk               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-proxy-lqp4f            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 8m51s                  kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   NodeHasSufficientPID     13m (x3 over 13m)      kubelet          Node ha-605114-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  13m (x3 over 13m)      kubelet          Node ha-605114-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x3 over 13m)      kubelet          Node ha-605114-m04 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           13m                    node-controller  Node ha-605114-m04 event: Registered Node ha-605114-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-605114-m04 event: Registered Node ha-605114-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-605114-m04 event: Registered Node ha-605114-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-605114-m04 status is now: NodeReady
	  Normal   RegisteredNode           11m                    node-controller  Node ha-605114-m04 event: Registered Node ha-605114-m04 in Controller
	  Normal   RegisteredNode           9m51s                  node-controller  Node ha-605114-m04 event: Registered Node ha-605114-m04 in Controller
	  Normal   RegisteredNode           9m37s                  node-controller  Node ha-605114-m04 event: Registered Node ha-605114-m04 in Controller
	  Normal   Starting                 9m22s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m22s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9m19s (x8 over 9m22s)  kubelet          Node ha-605114-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m19s (x8 over 9m22s)  kubelet          Node ha-605114-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m19s (x8 over 9m22s)  kubelet          Node ha-605114-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           9m17s                  node-controller  Node ha-605114-m04 event: Registered Node ha-605114-m04 in Controller
	  Normal   RegisteredNode           6m5s                   node-controller  Node ha-605114-m04 event: Registered Node ha-605114-m04 in Controller
	  Normal   NodeNotReady             5m14s                  node-controller  Node ha-605114-m04 status is now: NodeNotReady
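	The node view above comes from the on-node kubectl binary, as the "describe nodes" gathering step shows: ha-605114 is Ready, while ha-605114-m02 and ha-605114-m04 carry node.kubernetes.io/unreachable taints with all conditions Unknown since 19:10:33 (kubelet stopped posting status). A minimal sketch of regenerating this view once an apiserver answers, using the same paths the trace logged (run on a control-plane node):
	
	  # Sketch: the same invocation the gathering step used.
	  sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	  # Quicker readiness/taint summary with the same kubeconfig:
	  sudo /var/lib/minikube/binaries/v1.34.2/kubectl get nodes -o wide --kubeconfig=/var/lib/minikube/kubeconfig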
	
	
	==> dmesg <==
	[Dec13 17:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014739] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.517365] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033368] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.774100] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.795951] kauditd_printk_skb: 39 callbacks suppressed
	[Dec13 18:17] overlayfs: idmapped layers are currently not supported
	[  +0.067652] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec13 18:23] overlayfs: idmapped layers are currently not supported
	[Dec13 18:24] overlayfs: idmapped layers are currently not supported
	[Dec13 18:42] overlayfs: idmapped layers are currently not supported
	[Dec13 18:59] overlayfs: idmapped layers are currently not supported
	[ +33.753607] overlayfs: idmapped layers are currently not supported
	[Dec13 19:01] overlayfs: idmapped layers are currently not supported
	[Dec13 19:02] overlayfs: idmapped layers are currently not supported
	[Dec13 19:03] overlayfs: idmapped layers are currently not supported
	[Dec13 19:05] overlayfs: idmapped layers are currently not supported
	[  +4.041925] overlayfs: idmapped layers are currently not supported
	[ +36.958854] overlayfs: idmapped layers are currently not supported
	[Dec13 19:06] overlayfs: idmapped layers are currently not supported
	[Dec13 19:07] overlayfs: idmapped layers are currently not supported
	[  +4.088622] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [9a81ddd488bb7e9ca9d20cc8af4e9414463f3bf2bd40edd26c2e9395f731a3ec] <==
	{"level":"info","ts":"2025-12-13T19:09:39.431175Z","caller":"traceutil/trace.go:172","msg":"trace[1650970072] range","detail":"{range_begin:/registry/csidrivers/; range_end:/registry/csidrivers0; response_count:0; response_revision:2626; }","duration":"129.103507ms","start":"2025-12-13T19:09:39.302064Z","end":"2025-12-13T19:09:39.431167Z","steps":["trace[1650970072] 'agreement among raft nodes before linearized reading'  (duration: 128.051769ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-13T19:09:39.430187Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"128.351493ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" limit:10000 ","response":"range_response_count:2 size:1908"}
	{"level":"info","ts":"2025-12-13T19:09:39.431486Z","caller":"traceutil/trace.go:172","msg":"trace[706769155] range","detail":"{range_begin:/registry/services/specs/; range_end:/registry/services/specs0; response_count:2; response_revision:2626; }","duration":"129.64282ms","start":"2025-12-13T19:09:39.301832Z","end":"2025-12-13T19:09:39.431475Z","steps":["trace[706769155] 'agreement among raft nodes before linearized reading'  (duration: 128.305668ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-13T19:09:39.430250Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"128.518032ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/roles/\" range_end:\"/registry/roles0\" limit:10000 ","response":"range_response_count:12 size:7370"}
	{"level":"info","ts":"2025-12-13T19:09:39.431783Z","caller":"traceutil/trace.go:172","msg":"trace[1208935311] range","detail":"{range_begin:/registry/roles/; range_end:/registry/roles0; response_count:12; response_revision:2626; }","duration":"130.043599ms","start":"2025-12-13T19:09:39.301728Z","end":"2025-12-13T19:09:39.431772Z","steps":["trace[1208935311] 'agreement among raft nodes before linearized reading'  (duration: 128.468162ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-13T19:09:39.430267Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"128.574975ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csidrivers\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-13T19:09:39.432082Z","caller":"traceutil/trace.go:172","msg":"trace[1994846449] range","detail":"{range_begin:/registry/csidrivers; range_end:; response_count:0; response_revision:2626; }","duration":"130.383461ms","start":"2025-12-13T19:09:39.301689Z","end":"2025-12-13T19:09:39.432073Z","steps":["trace[1994846449] 'agreement among raft nodes before linearized reading'  (duration: 128.568222ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-13T19:09:39.430286Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"128.658701ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/horizontalpodautoscalers\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-13T19:09:39.432459Z","caller":"traceutil/trace.go:172","msg":"trace[1654927610] range","detail":"{range_begin:/registry/horizontalpodautoscalers; range_end:; response_count:0; response_revision:2626; }","duration":"130.828203ms","start":"2025-12-13T19:09:39.301621Z","end":"2025-12-13T19:09:39.432449Z","steps":["trace[1654927610] 'agreement among raft nodes before linearized reading'  (duration: 128.652579ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-13T19:09:39.430302Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"128.705978ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-13T19:09:39.432811Z","caller":"traceutil/trace.go:172","msg":"trace[81323615] range","detail":"{range_begin:/registry/configmaps; range_end:; response_count:0; response_revision:2626; }","duration":"131.208952ms","start":"2025-12-13T19:09:39.301593Z","end":"2025-12-13T19:09:39.432802Z","steps":["trace[81323615] 'agreement among raft nodes before linearized reading'  (duration: 128.698922ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-13T19:09:39.430337Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"132.394351ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:kube-controller-manager\" limit:1 ","response":"range_response_count:1 size:1041"}
	{"level":"info","ts":"2025-12-13T19:09:39.433834Z","caller":"traceutil/trace.go:172","msg":"trace[344691668] range","detail":"{range_begin:/registry/clusterroles/system:kube-controller-manager; range_end:; response_count:1; response_revision:2626; }","duration":"135.882151ms","start":"2025-12-13T19:09:39.297939Z","end":"2025-12-13T19:09:39.433821Z","steps":["trace[344691668] 'agreement among raft nodes before linearized reading'  (duration: 132.36844ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-13T19:09:39.430429Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"132.860031ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/\" range_end:\"/registry/configmaps0\" limit:10000 ","response":"range_response_count:11 size:18815"}
	{"level":"info","ts":"2025-12-13T19:09:39.434335Z","caller":"traceutil/trace.go:172","msg":"trace[1944125204] range","detail":"{range_begin:/registry/configmaps/; range_end:/registry/configmaps0; response_count:11; response_revision:2626; }","duration":"136.761335ms","start":"2025-12-13T19:09:39.297564Z","end":"2025-12-13T19:09:39.434326Z","steps":["trace[1944125204] 'agreement among raft nodes before linearized reading'  (duration: 132.783495ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-13T19:09:39.430483Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"132.832462ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/certificatesigningrequests/\" range_end:\"/registry/certificatesigningrequests0\" limit:10000 ","response":"range_response_count:4 size:9425"}
	{"level":"info","ts":"2025-12-13T19:09:39.434702Z","caller":"traceutil/trace.go:172","msg":"trace[1630690192] range","detail":"{range_begin:/registry/certificatesigningrequests/; range_end:/registry/certificatesigningrequests0; response_count:4; response_revision:2626; }","duration":"137.0456ms","start":"2025-12-13T19:09:39.297647Z","end":"2025-12-13T19:09:39.434692Z","steps":["trace[1630690192] 'agreement among raft nodes before linearized reading'  (duration: 132.792011ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-13T19:09:39.430503Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"132.881808ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/volumeattributesclasses/\" range_end:\"/registry/volumeattributesclasses0\" limit:10000 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-13T19:09:39.435067Z","caller":"traceutil/trace.go:172","msg":"trace[1656563266] range","detail":"{range_begin:/registry/volumeattributesclasses/; range_end:/registry/volumeattributesclasses0; response_count:0; response_revision:2626; }","duration":"137.439856ms","start":"2025-12-13T19:09:39.297617Z","end":"2025-12-13T19:09:39.435057Z","steps":["trace[1656563266] 'agreement among raft nodes before linearized reading'  (duration: 132.874046ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-13T19:09:39.430523Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"132.92591ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/horizontalpodautoscalers/\" range_end:\"/registry/horizontalpodautoscalers0\" limit:10000 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-13T19:09:39.435401Z","caller":"traceutil/trace.go:172","msg":"trace[1716858309] range","detail":"{range_begin:/registry/horizontalpodautoscalers/; range_end:/registry/horizontalpodautoscalers0; response_count:0; response_revision:2626; }","duration":"137.801578ms","start":"2025-12-13T19:09:39.297590Z","end":"2025-12-13T19:09:39.435392Z","steps":["trace[1716858309] 'agreement among raft nodes before linearized reading'  (duration: 132.919109ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-13T19:09:39.430545Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"133.039313ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges/\" range_end:\"/registry/limitranges0\" limit:10000 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-13T19:09:39.435723Z","caller":"traceutil/trace.go:172","msg":"trace[380978863] range","detail":"{range_begin:/registry/limitranges/; range_end:/registry/limitranges0; response_count:0; response_revision:2626; }","duration":"138.19644ms","start":"2025-12-13T19:09:39.297502Z","end":"2025-12-13T19:09:39.435698Z","steps":["trace[380978863] 'agreement among raft nodes before linearized reading'  (duration: 133.03ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-13T19:09:39.430563Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"133.034177ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/validatingadmissionpolicies/\" range_end:\"/registry/validatingadmissionpolicies0\" limit:10000 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-13T19:09:39.436008Z","caller":"traceutil/trace.go:172","msg":"trace[236711872] range","detail":"{range_begin:/registry/validatingadmissionpolicies/; range_end:/registry/validatingadmissionpolicies0; response_count:0; response_revision:2626; }","duration":"138.472451ms","start":"2025-12-13T19:09:39.297525Z","end":"2025-12-13T19:09:39.435998Z","steps":["trace[236711872] 'agreement among raft nodes before linearized reading'  (duration: 133.025848ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:15:48 up  1:58,  0 user,  load average: 0.38, 1.21, 1.35
	Linux ha-605114 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f1a416886d288f33359cd21dacc737dbed6a3c975d9323a89f8c93828c040431] <==
	I1213 19:15:05.129647       1 main.go:324] Node ha-605114-m04 has CIDR [10.244.3.0/24] 
	I1213 19:15:15.121671       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 19:15:15.121778       1 main.go:301] handling current node
	I1213 19:15:15.121805       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1213 19:15:15.121812       1 main.go:324] Node ha-605114-m02 has CIDR [10.244.1.0/24] 
	I1213 19:15:15.121970       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1213 19:15:15.121984       1 main.go:324] Node ha-605114-m04 has CIDR [10.244.3.0/24] 
	I1213 19:15:25.129931       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 19:15:25.129981       1 main.go:301] handling current node
	I1213 19:15:25.130000       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1213 19:15:25.130008       1 main.go:324] Node ha-605114-m02 has CIDR [10.244.1.0/24] 
	I1213 19:15:25.130327       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1213 19:15:25.130433       1 main.go:324] Node ha-605114-m04 has CIDR [10.244.3.0/24] 
	I1213 19:15:35.121949       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 19:15:35.122126       1 main.go:301] handling current node
	I1213 19:15:35.122926       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1213 19:15:35.123027       1 main.go:324] Node ha-605114-m02 has CIDR [10.244.1.0/24] 
	I1213 19:15:35.123298       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1213 19:15:35.123379       1 main.go:324] Node ha-605114-m04 has CIDR [10.244.3.0/24] 
	I1213 19:15:45.121169       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 19:15:45.121207       1 main.go:301] handling current node
	I1213 19:15:45.121226       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1213 19:15:45.121233       1 main.go:324] Node ha-605114-m02 has CIDR [10.244.1.0/24] 
	I1213 19:15:45.121394       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1213 19:15:45.121401       1 main.go:324] Node ha-605114-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [3c729bb1538bfb45bc9b5542f5524916c96b118344d2be8a42e58a0bc6d4cb0d] <==
	{"level":"warn","ts":"2025-12-13T19:09:39.225607Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40012ff680/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-13T19:09:39.225637Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40014ec3c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-13T19:09:39.225654Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40029a8780/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-13T19:09:39.225669Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40012fc780/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-13T19:09:39.225684Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40012fd2c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-13T19:09:39.231292Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40012fc1e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-13T19:09:39.231412Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40019832c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-13T19:09:39.231467Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001982000/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-13T19:09:39.231521Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400103ad20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-13T19:09:39.231578Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40019b2000/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-13T19:09:39.231633Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001f0bc20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-13T19:09:39.231700Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40012fed20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-13T19:09:39.231767Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40012fed20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-13T19:09:39.231831Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40028461e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-13T19:09:39.231883Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40028461e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-13T19:09:39.231933Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40028461e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-13T19:09:39.231988Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001bfa5a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-13T19:09:39.232044Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001bfa5a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	W1213 19:09:41.980970       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1213 19:09:41.982698       1 controller.go:667] quota admission added evaluator for: endpoints
	I1213 19:09:41.995308       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 19:09:44.281972       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1213 19:09:52.543985       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 19:10:34.144307       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1213 19:10:34.189645       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [167e9e0789f864655d959c63fd731257c88aa1e1b22515ec35f4a07af4678202] <==
	E1213 19:10:03.979335       1 gc_controller.go:151] "Failed to get node" err="node \"ha-605114-m03\" not found" logger="pod-garbage-collector-controller" node="ha-605114-m03"
	E1213 19:10:03.979363       1 gc_controller.go:151] "Failed to get node" err="node \"ha-605114-m03\" not found" logger="pod-garbage-collector-controller" node="ha-605114-m03"
	E1213 19:10:03.979375       1 gc_controller.go:151] "Failed to get node" err="node \"ha-605114-m03\" not found" logger="pod-garbage-collector-controller" node="ha-605114-m03"
	E1213 19:10:03.979382       1 gc_controller.go:151] "Failed to get node" err="node \"ha-605114-m03\" not found" logger="pod-garbage-collector-controller" node="ha-605114-m03"
	E1213 19:10:23.979733       1 gc_controller.go:151] "Failed to get node" err="node \"ha-605114-m03\" not found" logger="pod-garbage-collector-controller" node="ha-605114-m03"
	E1213 19:10:23.979852       1 gc_controller.go:151] "Failed to get node" err="node \"ha-605114-m03\" not found" logger="pod-garbage-collector-controller" node="ha-605114-m03"
	E1213 19:10:23.979884       1 gc_controller.go:151] "Failed to get node" err="node \"ha-605114-m03\" not found" logger="pod-garbage-collector-controller" node="ha-605114-m03"
	E1213 19:10:23.979949       1 gc_controller.go:151] "Failed to get node" err="node \"ha-605114-m03\" not found" logger="pod-garbage-collector-controller" node="ha-605114-m03"
	E1213 19:10:23.979979       1 gc_controller.go:151] "Failed to get node" err="node \"ha-605114-m03\" not found" logger="pod-garbage-collector-controller" node="ha-605114-m03"
	I1213 19:10:24.001195       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-605114-m03"
	I1213 19:10:24.044627       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-605114-m03"
	I1213 19:10:24.044809       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-605114-m03"
	I1213 19:10:24.081792       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-605114-m03"
	I1213 19:10:24.081903       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-605114-m03"
	I1213 19:10:24.149160       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-605114-m03"
	I1213 19:10:24.149272       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-605114-m03"
	I1213 19:10:24.187394       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-605114-m03"
	I1213 19:10:24.187500       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-4kfpv"
	I1213 19:10:24.241495       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-4kfpv"
	I1213 19:10:24.241622       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-5m48f"
	I1213 19:10:24.284484       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-5m48f"
	I1213 19:10:24.284851       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-605114-m03"
	I1213 19:10:24.328812       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-605114-m03"
	I1213 19:15:34.087612       1 taint_eviction.go:111] "Deleting pod" logger="taint-eviction-controller" controller="taint-eviction-controller" pod="default/busybox-7b57f96db7-wl925"
	I1213 19:15:44.076408       1 taint_eviction.go:111] "Deleting pod" logger="taint-eviction-controller" controller="taint-eviction-controller" pod="default/busybox-7b57f96db7-gqp98"
	
	
	==> kube-controller-manager [7bc9cb09a081ed47d17ecf35e2d91134eaacd5250ce00bcdebed3d1097640773] <==
	I1213 19:08:49.567762       1 serving.go:386] Generated self-signed cert in-memory
	I1213 19:08:50.364508       1 controllermanager.go:191] "Starting" version="v1.34.2"
	I1213 19:08:50.364608       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 19:08:50.366449       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1213 19:08:50.366623       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1213 19:08:50.366938       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1213 19:08:50.366991       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1213 19:09:04.386470       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[-]etcd failed: reason withheld\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-proxy [adb6a0d2cd30435f1f392f09033a5ad40b3f1d3a5a2f1fe0d2ae76a50bf8f3b4] <==
	I1213 19:08:50.244883       1 reflector.go:568] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding"
	E1213 19:08:50.246471       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": http2: client connection lost"
	E1213 19:08:54.165411       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-605114&resourceVersion=2607\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 19:08:54.165542       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2599\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1213 19:08:54.165634       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2598\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1213 19:08:54.165741       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2598\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1213 19:08:57.237395       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-605114&resourceVersion=2607\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 19:08:57.237414       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2598\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1213 19:08:57.237660       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2599\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1213 19:08:57.237667       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2598\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1213 19:09:03.989710       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2599\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1213 19:09:03.989962       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2598\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1213 19:09:03.990083       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.49.254:8443: connect: no route to host"
	E1213 19:09:03.990245       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2598\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1213 19:09:03.990394       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-605114&resourceVersion=2607\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 19:09:15.029488       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-605114&resourceVersion=2607\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 19:09:15.029488       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2598\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1213 19:09:15.029671       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2599\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1213 19:09:15.029765       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2598\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1213 19:09:18.101424       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.49.254:8443: connect: no route to host"
	E1213 19:09:31.797443       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2599\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1213 19:09:31.797538       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.49.254:8443: connect: no route to host"
	E1213 19:09:31.797646       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-605114&resourceVersion=2607\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 19:09:34.869405       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2598\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1213 19:09:42.229400       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2598\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	
	
	==> kube-scheduler [ee202abc8dba3b97ac56d7c3063ce4fae0734134ba47b9d6070588c897f7baf0] <==
	E1213 19:08:02.527700       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1213 19:08:02.527776       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1213 19:08:02.527848       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1213 19:08:02.527900       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1213 19:08:02.527911       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1213 19:08:02.527950       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1213 19:08:02.528002       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1213 19:08:02.528106       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1213 19:08:02.528181       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1213 19:08:02.528340       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1213 19:08:02.528402       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1213 19:08:03.355200       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1213 19:08:03.375752       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1213 19:08:03.384341       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1213 19:08:03.496281       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1213 19:08:03.527514       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 19:08:03.564170       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1213 19:08:03.604860       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1213 19:08:03.609546       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1213 19:08:03.663151       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1213 19:08:03.683755       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1213 19:08:03.838837       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1213 19:08:03.901316       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1213 19:08:03.901563       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1213 19:08:06.412915       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 13 19:09:04 ha-605114 kubelet[806]: I1213 19:09:04.239034     806 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
	Dec 13 19:09:04 ha-605114 kubelet[806]: E1213 19:09:04.524602     806 status_manager.go:1018] "Failed to get status for pod" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods coredns-66bc5c9577-rc9qg)" podUID="0f2b52ea-d2f2-4307-8a52-619a737c2611" pod="kube-system/coredns-66bc5c9577-rc9qg"
	Dec 13 19:09:04 ha-605114 kubelet[806]: I1213 19:09:04.666266     806 scope.go:117] "RemoveContainer" containerID="38e10b9deae562bcc475d6b257111633953b93aa5e59b05a1a5aaca29705804b"
	Dec 13 19:09:04 ha-605114 kubelet[806]: I1213 19:09:04.666833     806 scope.go:117] "RemoveContainer" containerID="7bc9cb09a081ed47d17ecf35e2d91134eaacd5250ce00bcdebed3d1097640773"
	Dec 13 19:09:04 ha-605114 kubelet[806]: E1213 19:09:04.667006     806 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-605114_kube-system(6b36430ebbfe01869fc54848b2e1c2a9)\"" pod="kube-system/kube-controller-manager-ha-605114" podUID="6b36430ebbfe01869fc54848b2e1c2a9"
	Dec 13 19:09:05 ha-605114 kubelet[806]: E1213 19:09:05.059732     806 kubelet_node_status.go:486] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T19:08:55Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T19:08:55Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T19:08:55Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T19:08:55Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"ha-605114\": Patch \"https://192.168.49.2:8443/api/v1/nodes/ha-605114/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Dec 13 19:09:06 ha-605114 kubelet[806]: I1213 19:09:06.894025     806 scope.go:117] "RemoveContainer" containerID="7bc9cb09a081ed47d17ecf35e2d91134eaacd5250ce00bcdebed3d1097640773"
	Dec 13 19:09:06 ha-605114 kubelet[806]: E1213 19:09:06.894244     806 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-605114_kube-system(6b36430ebbfe01869fc54848b2e1c2a9)\"" pod="kube-system/kube-controller-manager-ha-605114" podUID="6b36430ebbfe01869fc54848b2e1c2a9"
	Dec 13 19:09:12 ha-605114 kubelet[806]: E1213 19:09:12.933737     806 projected.go:196] Error preparing data for projected volume kube-api-access-sctl2 for pod kube-system/storage-provisioner: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
	Dec 13 19:09:12 ha-605114 kubelet[806]: E1213 19:09:12.933838     806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2bdd28fc-c3f6-401d-9328-27dc669e196a-kube-api-access-sctl2 podName:2bdd28fc-c3f6-401d-9328-27dc669e196a nodeName:}" failed. No retries permitted until 2025-12-13 19:09:13.933816541 +0000 UTC m=+79.712758196 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-sctl2" (UniqueName: "kubernetes.io/projected/2bdd28fc-c3f6-401d-9328-27dc669e196a-kube-api-access-sctl2") pod "storage-provisioner" (UID: "2bdd28fc-c3f6-401d-9328-27dc669e196a") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
	Dec 13 19:09:12 ha-605114 kubelet[806]: E1213 19:09:12.934020     806 projected.go:196] Error preparing data for projected volume kube-api-access-4p9km for pod kube-system/coredns-66bc5c9577-85rpk: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
	Dec 13 19:09:12 ha-605114 kubelet[806]: E1213 19:09:12.934081     806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d7650f5f-c93c-4824-98ba-c6242f1d9595-kube-api-access-4p9km podName:d7650f5f-c93c-4824-98ba-c6242f1d9595 nodeName:}" failed. No retries permitted until 2025-12-13 19:09:13.934068028 +0000 UTC m=+79.713009674 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-4p9km" (UniqueName: "kubernetes.io/projected/d7650f5f-c93c-4824-98ba-c6242f1d9595-kube-api-access-4p9km") pod "coredns-66bc5c9577-85rpk" (UID: "d7650f5f-c93c-4824-98ba-c6242f1d9595") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
	Dec 13 19:09:12 ha-605114 kubelet[806]: E1213 19:09:12.934128     806 projected.go:196] Error preparing data for projected volume kube-api-access-rtb9w for pod default/busybox-7b57f96db7-h5qqv: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
	Dec 13 19:09:12 ha-605114 kubelet[806]: E1213 19:09:12.934157     806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b89d6cc7-836d-44be-997e-9a7fe221a5d8-kube-api-access-rtb9w podName:b89d6cc7-836d-44be-997e-9a7fe221a5d8 nodeName:}" failed. No retries permitted until 2025-12-13 19:09:13.934149422 +0000 UTC m=+79.713091069 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-rtb9w" (UniqueName: "kubernetes.io/projected/b89d6cc7-836d-44be-997e-9a7fe221a5d8-kube-api-access-rtb9w") pod "busybox-7b57f96db7-h5qqv" (UID: "b89d6cc7-836d-44be-997e-9a7fe221a5d8") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
	Dec 13 19:09:14 ha-605114 kubelet[806]: E1213 19:09:14.239262     806 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-605114?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="200ms"
	Dec 13 19:09:15 ha-605114 kubelet[806]: E1213 19:09:15.060662     806 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ha-605114\": Get \"https://192.168.49.2:8443/api/v1/nodes/ha-605114?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Dec 13 19:09:17 ha-605114 kubelet[806]: I1213 19:09:17.413956     806 scope.go:117] "RemoveContainer" containerID="7bc9cb09a081ed47d17ecf35e2d91134eaacd5250ce00bcdebed3d1097640773"
	Dec 13 19:09:17 ha-605114 kubelet[806]: E1213 19:09:17.414150     806 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-605114_kube-system(6b36430ebbfe01869fc54848b2e1c2a9)\"" pod="kube-system/kube-controller-manager-ha-605114" podUID="6b36430ebbfe01869fc54848b2e1c2a9"
	Dec 13 19:09:19 ha-605114 kubelet[806]: E1213 19:09:19.556378     806 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{ha-605114.1880dbef376d6535  default   2620 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-605114,UID:ha-605114,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ha-605114 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ha-605114,},FirstTimestamp:2025-12-13 19:07:54 +0000 UTC,LastTimestamp:2025-12-13 19:07:54.517705313 +0000 UTC m=+0.296646960,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-605114,}"
	Dec 13 19:09:24 ha-605114 kubelet[806]: E1213 19:09:24.441298     806 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-605114?timeout=10s\": context deadline exceeded" interval="400ms"
	Dec 13 19:09:25 ha-605114 kubelet[806]: E1213 19:09:25.061462     806 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ha-605114\": Get \"https://192.168.49.2:8443/api/v1/nodes/ha-605114?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Dec 13 19:09:31 ha-605114 kubelet[806]: I1213 19:09:31.414094     806 scope.go:117] "RemoveContainer" containerID="7bc9cb09a081ed47d17ecf35e2d91134eaacd5250ce00bcdebed3d1097640773"
	Dec 13 19:09:34 ha-605114 kubelet[806]: E1213 19:09:34.844103     806 controller.go:145] "Failed to ensure lease exists, will retry" err="the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io ha-605114)" interval="800ms"
	Dec 13 19:09:35 ha-605114 kubelet[806]: E1213 19:09:35.061741     806 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ha-605114\": Get \"https://192.168.49.2:8443/api/v1/nodes/ha-605114?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Dec 13 19:09:39 ha-605114 kubelet[806]: W1213 19:09:39.981430     806 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/b8b77eca4604af1b6af60bca70543cf68142cce11359af7814880328fa73eb01/crio-1d4641fc3fdaccf9146fa15e852f55d85346be6c485420108067be6aabe0b5f4 WatchSource:0}: Error finding container 1d4641fc3fdaccf9146fa15e852f55d85346be6c485420108067be6aabe0b5f4: Status 404 returned error can't find the container with id 1d4641fc3fdaccf9146fa15e852f55d85346be6c485420108067be6aabe0b5f4
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-605114 -n ha-605114
helpers_test.go:270: (dbg) Run:  kubectl --context ha-605114 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: busybox-7b57f96db7-6ldgc busybox-7b57f96db7-jxpf7
helpers_test.go:283: ======> post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context ha-605114 describe pod busybox-7b57f96db7-6ldgc busybox-7b57f96db7-jxpf7
helpers_test.go:291: (dbg) kubectl --context ha-605114 describe pod busybox-7b57f96db7-6ldgc busybox-7b57f96db7-jxpf7:

-- stdout --
	Name:             busybox-7b57f96db7-6ldgc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hsk8c (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-hsk8c:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age   From               Message
	  ----     ------            ----  ----               -------
	  Warning  FailedScheduling  16s   default-scheduler  0/3 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/3 nodes are available: 1 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	
	
	Name:             busybox-7b57f96db7-jxpf7
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-696pr (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-696pr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age   From               Message
	  ----     ------            ----  ----               -------
	  Warning  FailedScheduling  6s    default-scheduler  0/3 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/3 nodes are available: 1 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.

-- /stdout --
helpers_test.go:294: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (5.16s)

TestMultiControlPlane/serial/AddSecondaryNode (91.3s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 node add --control-plane --alsologtostderr -v 5
E1213 19:16:08.833158    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 19:16:42.459316    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-350101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-605114 node add --control-plane --alsologtostderr -v 5: (1m25.016092562s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-605114 status --alsologtostderr -v 5: exit status 7 (825.964915ms)

-- stdout --
	ha-605114
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-605114-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-605114-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
	ha-605114-m05
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	

-- /stdout --
** stderr ** 
	I1213 19:17:15.703484  112405 out.go:360] Setting OutFile to fd 1 ...
	I1213 19:17:15.703667  112405 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 19:17:15.703740  112405 out.go:374] Setting ErrFile to fd 2...
	I1213 19:17:15.703784  112405 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 19:17:15.704231  112405 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-2686/.minikube/bin
	I1213 19:17:15.704494  112405 out.go:368] Setting JSON to false
	I1213 19:17:15.704534  112405 mustload.go:66] Loading cluster: ha-605114
	I1213 19:17:15.705176  112405 config.go:182] Loaded profile config "ha-605114": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 19:17:15.705219  112405 status.go:174] checking status of ha-605114 ...
	I1213 19:17:15.705519  112405 notify.go:221] Checking for updates...
	I1213 19:17:15.705823  112405 cli_runner.go:164] Run: docker container inspect ha-605114 --format={{.State.Status}}
	I1213 19:17:15.727282  112405 status.go:371] ha-605114 host status = "Running" (err=<nil>)
	I1213 19:17:15.727304  112405 host.go:66] Checking if "ha-605114" exists ...
	I1213 19:17:15.727609  112405 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-605114
	I1213 19:17:15.757726  112405 host.go:66] Checking if "ha-605114" exists ...
	I1213 19:17:15.758078  112405 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 19:17:15.758120  112405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114
	I1213 19:17:15.785154  112405 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/ha-605114/id_rsa Username:docker}
	I1213 19:17:15.891408  112405 ssh_runner.go:195] Run: systemctl --version
	I1213 19:17:15.898902  112405 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 19:17:15.913833  112405 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 19:17:15.978531  112405 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-12-13 19:17:15.968269703 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 19:17:15.979110  112405 kubeconfig.go:125] found "ha-605114" server: "https://192.168.49.254:8443"
	I1213 19:17:15.979157  112405 api_server.go:166] Checking apiserver status ...
	I1213 19:17:15.979205  112405 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:17:15.997763  112405 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/946/cgroup
	I1213 19:17:16.007005  112405 api_server.go:182] apiserver freezer: "6:freezer:/docker/b8b77eca4604af1b6af60bca70543cf68142cce11359af7814880328fa73eb01/crio/crio-3c729bb1538bfb45bc9b5542f5524916c96b118344d2be8a42e58a0bc6d4cb0d"
	I1213 19:17:16.007089  112405 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b8b77eca4604af1b6af60bca70543cf68142cce11359af7814880328fa73eb01/crio/crio-3c729bb1538bfb45bc9b5542f5524916c96b118344d2be8a42e58a0bc6d4cb0d/freezer.state
	I1213 19:17:16.018184  112405 api_server.go:204] freezer state: "THAWED"
	I1213 19:17:16.018216  112405 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1213 19:17:16.026589  112405 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1213 19:17:16.026617  112405 status.go:463] ha-605114 apiserver status = Running (err=<nil>)
	I1213 19:17:16.026628  112405 status.go:176] ha-605114 status: &{Name:ha-605114 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 19:17:16.026643  112405 status.go:174] checking status of ha-605114-m02 ...
	I1213 19:17:16.026951  112405 cli_runner.go:164] Run: docker container inspect ha-605114-m02 --format={{.State.Status}}
	I1213 19:17:16.044632  112405 status.go:371] ha-605114-m02 host status = "Running" (err=<nil>)
	I1213 19:17:16.044658  112405 host.go:66] Checking if "ha-605114-m02" exists ...
	I1213 19:17:16.045122  112405 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-605114-m02
	I1213 19:17:16.064330  112405 host.go:66] Checking if "ha-605114-m02" exists ...
	I1213 19:17:16.064632  112405 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 19:17:16.064669  112405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114-m02
	I1213 19:17:16.084918  112405 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/ha-605114-m02/id_rsa Username:docker}
	I1213 19:17:16.194315  112405 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 19:17:16.211922  112405 kubeconfig.go:125] found "ha-605114" server: "https://192.168.49.254:8443"
	I1213 19:17:16.211948  112405 api_server.go:166] Checking apiserver status ...
	I1213 19:17:16.211993  112405 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 19:17:16.225510  112405 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1213 19:17:16.225532  112405 status.go:463] ha-605114-m02 apiserver status = Running (err=<nil>)
	I1213 19:17:16.225541  112405 status.go:176] ha-605114-m02 status: &{Name:ha-605114-m02 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 19:17:16.225556  112405 status.go:174] checking status of ha-605114-m04 ...
	I1213 19:17:16.225868  112405 cli_runner.go:164] Run: docker container inspect ha-605114-m04 --format={{.State.Status}}
	I1213 19:17:16.245756  112405 status.go:371] ha-605114-m04 host status = "Stopped" (err=<nil>)
	I1213 19:17:16.245780  112405 status.go:384] host is not running, skipping remaining checks
	I1213 19:17:16.245795  112405 status.go:176] ha-605114-m04 status: &{Name:ha-605114-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1213 19:17:16.245824  112405 status.go:174] checking status of ha-605114-m05 ...
	I1213 19:17:16.246149  112405 cli_runner.go:164] Run: docker container inspect ha-605114-m05 --format={{.State.Status}}
	I1213 19:17:16.263826  112405 status.go:371] ha-605114-m05 host status = "Running" (err=<nil>)
	I1213 19:17:16.263849  112405 host.go:66] Checking if "ha-605114-m05" exists ...
	I1213 19:17:16.264155  112405 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-605114-m05
	I1213 19:17:16.285858  112405 host.go:66] Checking if "ha-605114-m05" exists ...
	I1213 19:17:16.286914  112405 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 19:17:16.286977  112405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114-m05
	I1213 19:17:16.306766  112405 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32843 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/ha-605114-m05/id_rsa Username:docker}
	I1213 19:17:16.420399  112405 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 19:17:16.434193  112405 kubeconfig.go:125] found "ha-605114" server: "https://192.168.49.254:8443"
	I1213 19:17:16.434236  112405 api_server.go:166] Checking apiserver status ...
	I1213 19:17:16.434278  112405 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:17:16.447248  112405 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1221/cgroup
	I1213 19:17:16.455942  112405 api_server.go:182] apiserver freezer: "6:freezer:/docker/9e5b8952087bd720c19c9c65458a5dba2b02bb96c1e7878a2d9e40a9ffb961a8/crio/crio-37611d6faa4d865c72f3d03e2e9086e70d26ecc6039679f2a580470368fa3bbb"
	I1213 19:17:16.456015  112405 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/9e5b8952087bd720c19c9c65458a5dba2b02bb96c1e7878a2d9e40a9ffb961a8/crio/crio-37611d6faa4d865c72f3d03e2e9086e70d26ecc6039679f2a580470368fa3bbb/freezer.state
	I1213 19:17:16.463592  112405 api_server.go:204] freezer state: "THAWED"
	I1213 19:17:16.463622  112405 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1213 19:17:16.473442  112405 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1213 19:17:16.473474  112405 status.go:463] ha-605114-m05 apiserver status = Running (err=<nil>)
	I1213 19:17:16.473484  112405 status.go:176] ha-605114-m05 status: &{Name:ha-605114-m05 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:615: failed to run minikube status. args "out/minikube-linux-arm64 -p ha-605114 status --alsologtostderr -v 5" : exit status 7
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect ha-605114
helpers_test.go:244: (dbg) docker inspect ha-605114:

-- stdout --
	[
	    {
	        "Id": "b8b77eca4604af1b6af60bca70543cf68142cce11359af7814880328fa73eb01",
	        "Created": "2025-12-13T18:58:54.586877202Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 93050,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T19:07:47.614428932Z",
	            "FinishedAt": "2025-12-13T19:07:46.864889381Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/b8b77eca4604af1b6af60bca70543cf68142cce11359af7814880328fa73eb01/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b8b77eca4604af1b6af60bca70543cf68142cce11359af7814880328fa73eb01/hostname",
	        "HostsPath": "/var/lib/docker/containers/b8b77eca4604af1b6af60bca70543cf68142cce11359af7814880328fa73eb01/hosts",
	        "LogPath": "/var/lib/docker/containers/b8b77eca4604af1b6af60bca70543cf68142cce11359af7814880328fa73eb01/b8b77eca4604af1b6af60bca70543cf68142cce11359af7814880328fa73eb01-json.log",
	        "Name": "/ha-605114",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-605114:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-605114",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b8b77eca4604af1b6af60bca70543cf68142cce11359af7814880328fa73eb01",
	                "LowerDir": "/var/lib/docker/overlay2/8397f5133759b005c7933e08a612b6b8947df33c29226cae46c5c83d03247aff-init/diff:/var/lib/docker/overlay2/4cda671c3c20fb572bbb254b6cb2d66de67b46788c2aa883ec19024f1ff16f23/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8397f5133759b005c7933e08a612b6b8947df33c29226cae46c5c83d03247aff/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8397f5133759b005c7933e08a612b6b8947df33c29226cae46c5c83d03247aff/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8397f5133759b005c7933e08a612b6b8947df33c29226cae46c5c83d03247aff/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-605114",
	                "Source": "/var/lib/docker/volumes/ha-605114/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-605114",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-605114",
	                "name.minikube.sigs.k8s.io": "ha-605114",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7c9ba4aac7e27f5373688f6fc1a7a905972eca17b43555a3811eba451288f742",
	            "SandboxKey": "/var/run/docker/netns/7c9ba4aac7e2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32833"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32834"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32837"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32835"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32836"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-605114": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9a:0b:16:d7:dc:44",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a2f3617b1da5e979c171e0e32faeb143b6ffd1484ed485ce26cb0c66c2f2f8d4",
	                    "EndpointID": "ad19576bfc7fdb2d25ff186edf415bfaa77021d19f2378c0078a6b8dd2c2a121",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-605114",
	                        "b8b77eca4604"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-605114 -n ha-605114
helpers_test.go:253: <<< TestMultiControlPlane/serial/AddSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p ha-605114 logs -n 25: (2.746147161s)
helpers_test.go:261: TestMultiControlPlane/serial/AddSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-605114 ssh -n ha-605114-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:03 UTC │
	│ ssh     │ ha-605114 ssh -n ha-605114-m04 sudo cat /home/docker/cp-test_ha-605114-m03_ha-605114-m04.txt                                         │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:03 UTC │
	│ cp      │ ha-605114 cp testdata/cp-test.txt ha-605114-m04:/home/docker/cp-test.txt                                                             │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:03 UTC │
	│ ssh     │ ha-605114 ssh -n ha-605114-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:03 UTC │
	│ cp      │ ha-605114 cp ha-605114-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1407969839/001/cp-test_ha-605114-m04.txt │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:03 UTC │
	│ ssh     │ ha-605114 ssh -n ha-605114-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:03 UTC │
	│ cp      │ ha-605114 cp ha-605114-m04:/home/docker/cp-test.txt ha-605114:/home/docker/cp-test_ha-605114-m04_ha-605114.txt                       │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:03 UTC │
	│ ssh     │ ha-605114 ssh -n ha-605114-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:03 UTC │
	│ ssh     │ ha-605114 ssh -n ha-605114 sudo cat /home/docker/cp-test_ha-605114-m04_ha-605114.txt                                                 │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:03 UTC │
	│ cp      │ ha-605114 cp ha-605114-m04:/home/docker/cp-test.txt ha-605114-m02:/home/docker/cp-test_ha-605114-m04_ha-605114-m02.txt               │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:03 UTC │
	│ ssh     │ ha-605114 ssh -n ha-605114-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:03 UTC │
	│ ssh     │ ha-605114 ssh -n ha-605114-m02 sudo cat /home/docker/cp-test_ha-605114-m04_ha-605114-m02.txt                                         │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:03 UTC │
	│ cp      │ ha-605114 cp ha-605114-m04:/home/docker/cp-test.txt ha-605114-m03:/home/docker/cp-test_ha-605114-m04_ha-605114-m03.txt               │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:03 UTC │
	│ ssh     │ ha-605114 ssh -n ha-605114-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:03 UTC │
	│ ssh     │ ha-605114 ssh -n ha-605114-m03 sudo cat /home/docker/cp-test_ha-605114-m04_ha-605114-m03.txt                                         │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:03 UTC │
	│ node    │ ha-605114 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:03 UTC │
	│ node    │ ha-605114 node start m02 --alsologtostderr -v 5                                                                                      │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:04 UTC │
	│ node    │ ha-605114 node list --alsologtostderr -v 5                                                                                           │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:04 UTC │                     │
	│ stop    │ ha-605114 stop --alsologtostderr -v 5                                                                                                │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:04 UTC │ 13 Dec 25 19:05 UTC │
	│ start   │ ha-605114 start --wait true --alsologtostderr -v 5                                                                                   │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:05 UTC │ 13 Dec 25 19:06 UTC │
	│ node    │ ha-605114 node list --alsologtostderr -v 5                                                                                           │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:06 UTC │                     │
	│ node    │ ha-605114 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:06 UTC │ 13 Dec 25 19:07 UTC │
	│ stop    │ ha-605114 stop --alsologtostderr -v 5                                                                                                │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:07 UTC │ 13 Dec 25 19:07 UTC │
	│ start   │ ha-605114 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                                         │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:07 UTC │                     │
	│ node    │ ha-605114 node add --control-plane --alsologtostderr -v 5                                                                            │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:15 UTC │ 13 Dec 25 19:17 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 19:07:47
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 19:07:47.349427   92925 out.go:360] Setting OutFile to fd 1 ...
	I1213 19:07:47.349751   92925 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 19:07:47.349782   92925 out.go:374] Setting ErrFile to fd 2...
	I1213 19:07:47.349805   92925 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 19:07:47.350088   92925 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-2686/.minikube/bin
	I1213 19:07:47.350503   92925 out.go:368] Setting JSON to false
	I1213 19:07:47.351372   92925 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":6620,"bootTime":1765646248,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 19:07:47.351472   92925 start.go:143] virtualization:  
	I1213 19:07:47.357175   92925 out.go:179] * [ha-605114] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 19:07:47.360285   92925 notify.go:221] Checking for updates...
	I1213 19:07:47.363188   92925 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 19:07:47.366066   92925 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 19:07:47.368997   92925 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-2686/kubeconfig
	I1213 19:07:47.371939   92925 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-2686/.minikube
	I1213 19:07:47.374564   92925 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 19:07:47.377424   92925 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 19:07:47.380815   92925 config.go:182] Loaded profile config "ha-605114": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 19:07:47.381472   92925 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 19:07:47.411852   92925 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 19:07:47.411970   92925 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 19:07:47.470115   92925 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-13 19:07:47.460445366 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 19:07:47.470224   92925 docker.go:319] overlay module found
	I1213 19:07:47.473192   92925 out.go:179] * Using the docker driver based on existing profile
	I1213 19:07:47.475964   92925 start.go:309] selected driver: docker
	I1213 19:07:47.475980   92925 start.go:927] validating driver "docker" against &{Name:ha-605114 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-605114 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow
:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 19:07:47.476125   92925 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 19:07:47.476235   92925 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 19:07:47.532110   92925 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-13 19:07:47.522555398 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 19:07:47.532550   92925 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 19:07:47.532582   92925 cni.go:84] Creating CNI manager for ""
	I1213 19:07:47.532636   92925 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1213 19:07:47.532689   92925 start.go:353] cluster config:
	{Name:ha-605114 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-605114 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-s
erver:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 19:07:47.537457   92925 out.go:179] * Starting "ha-605114" primary control-plane node in "ha-605114" cluster
	I1213 19:07:47.540151   92925 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 19:07:47.542975   92925 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 19:07:47.545679   92925 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 19:07:47.545731   92925 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-2686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1213 19:07:47.545743   92925 cache.go:65] Caching tarball of preloaded images
	I1213 19:07:47.545753   92925 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 19:07:47.545828   92925 preload.go:238] Found /home/jenkins/minikube-integration/22122-2686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 19:07:47.545838   92925 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1213 19:07:47.545971   92925 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/config.json ...
	I1213 19:07:47.565319   92925 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 19:07:47.565343   92925 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 19:07:47.565364   92925 cache.go:243] Successfully downloaded all kic artifacts
	I1213 19:07:47.565392   92925 start.go:360] acquireMachinesLock for ha-605114: {Name:mk8d2cbed975abcdd5664438df80622381a361a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 19:07:47.565456   92925 start.go:364] duration metric: took 41.903µs to acquireMachinesLock for "ha-605114"
	I1213 19:07:47.565477   92925 start.go:96] Skipping create...Using existing machine configuration
	I1213 19:07:47.565483   92925 fix.go:54] fixHost starting: 
	I1213 19:07:47.565741   92925 cli_runner.go:164] Run: docker container inspect ha-605114 --format={{.State.Status}}
	I1213 19:07:47.581688   92925 fix.go:112] recreateIfNeeded on ha-605114: state=Stopped err=<nil>
	W1213 19:07:47.581717   92925 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 19:07:47.584947   92925 out.go:252] * Restarting existing docker container for "ha-605114" ...
	I1213 19:07:47.585046   92925 cli_runner.go:164] Run: docker start ha-605114
	I1213 19:07:47.865372   92925 cli_runner.go:164] Run: docker container inspect ha-605114 --format={{.State.Status}}
	I1213 19:07:47.883933   92925 kic.go:430] container "ha-605114" state is running.
	I1213 19:07:47.884352   92925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-605114
	I1213 19:07:47.906511   92925 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/config.json ...
	I1213 19:07:47.906746   92925 machine.go:94] provisionDockerMachine start ...
	I1213 19:07:47.906805   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114
	I1213 19:07:47.930498   92925 main.go:143] libmachine: Using SSH client type: native
	I1213 19:07:47.930829   92925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1213 19:07:47.930842   92925 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 19:07:47.931376   92925 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46728->127.0.0.1:32833: read: connection reset by peer
	I1213 19:07:51.084950   92925 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-605114
	
	I1213 19:07:51.084978   92925 ubuntu.go:182] provisioning hostname "ha-605114"
	I1213 19:07:51.085064   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114
	I1213 19:07:51.103183   92925 main.go:143] libmachine: Using SSH client type: native
	I1213 19:07:51.103509   92925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1213 19:07:51.103523   92925 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-605114 && echo "ha-605114" | sudo tee /etc/hostname
	I1213 19:07:51.262962   92925 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-605114
	
	I1213 19:07:51.263080   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114
	I1213 19:07:51.281758   92925 main.go:143] libmachine: Using SSH client type: native
	I1213 19:07:51.282067   92925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1213 19:07:51.282093   92925 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-605114' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-605114/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-605114' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 19:07:51.433225   92925 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 19:07:51.433251   92925 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-2686/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-2686/.minikube}
	I1213 19:07:51.433276   92925 ubuntu.go:190] setting up certificates
	I1213 19:07:51.433294   92925 provision.go:84] configureAuth start
	I1213 19:07:51.433356   92925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-605114
	I1213 19:07:51.451056   92925 provision.go:143] copyHostCerts
	I1213 19:07:51.451109   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem
	I1213 19:07:51.451157   92925 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem, removing ...
	I1213 19:07:51.451169   92925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem
	I1213 19:07:51.451244   92925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem (1082 bytes)
	I1213 19:07:51.451330   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem
	I1213 19:07:51.451351   92925 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem, removing ...
	I1213 19:07:51.451359   92925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem
	I1213 19:07:51.451387   92925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem (1123 bytes)
	I1213 19:07:51.451438   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem
	I1213 19:07:51.451459   92925 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem, removing ...
	I1213 19:07:51.451473   92925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem
	I1213 19:07:51.451505   92925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem (1675 bytes)
	I1213 19:07:51.451557   92925 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem org=jenkins.ha-605114 san=[127.0.0.1 192.168.49.2 ha-605114 localhost minikube]
	I1213 19:07:51.562646   92925 provision.go:177] copyRemoteCerts
	I1213 19:07:51.562709   92925 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 19:07:51.562753   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114
	I1213 19:07:51.579816   92925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/ha-605114/id_rsa Username:docker}
	I1213 19:07:51.684734   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1213 19:07:51.684815   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 19:07:51.703545   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1213 19:07:51.703625   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1213 19:07:51.721319   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1213 19:07:51.721382   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 19:07:51.738806   92925 provision.go:87] duration metric: took 305.496623ms to configureAuth
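The server cert above (org=jenkins.ha-605114, san=[127.0.0.1 192.168.49.2 ha-605114 localhost minikube]) is generated by minikube itself during provisioning rather than by shelling out. An approximate openssl equivalent of that step, purely illustrative (bash syntax; only the SAN list and the CA paths are taken from the log, the output file names are hypothetical):

    MK=/home/jenkins/minikube-integration/22122-2686/.minikube
    openssl genrsa -out server-key.pem 2048
    openssl req -new -key server-key.pem -subj "/O=jenkins.ha-605114" -out server.csr
    openssl x509 -req -in server.csr -CA "$MK/certs/ca.pem" -CAkey "$MK/certs/ca-key.pem" \
      -CAcreateserial -days 365 -out server.pem \
      -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.49.2,DNS:ha-605114,DNS:localhost,DNS:minikube")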
	I1213 19:07:51.738832   92925 ubuntu.go:206] setting minikube options for container-runtime
	I1213 19:07:51.739059   92925 config.go:182] Loaded profile config "ha-605114": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 19:07:51.739152   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114
	I1213 19:07:51.756183   92925 main.go:143] libmachine: Using SSH client type: native
	I1213 19:07:51.756478   92925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1213 19:07:51.756493   92925 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 19:07:52.176419   92925 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 19:07:52.176439   92925 machine.go:97] duration metric: took 4.269683244s to provisionDockerMachine
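With the docker driver the node is itself a container, so the CRIO_MINIKUBE_OPTIONS drop-in written above can be spot-checked from the host without an SSH session. A minimal sketch, assuming the ha-605114 container is running and systemd is PID 1 in the kicbase image (as in this run):

    docker exec ha-605114 cat /etc/sysconfig/crio.minikube
    docker exec ha-605114 systemctl is-active crio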
	I1213 19:07:52.176449   92925 start.go:293] postStartSetup for "ha-605114" (driver="docker")
	I1213 19:07:52.176460   92925 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 19:07:52.176518   92925 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 19:07:52.176563   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114
	I1213 19:07:52.201857   92925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/ha-605114/id_rsa Username:docker}
	I1213 19:07:52.305092   92925 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 19:07:52.308224   92925 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 19:07:52.308251   92925 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 19:07:52.308263   92925 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-2686/.minikube/addons for local assets ...
	I1213 19:07:52.308316   92925 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-2686/.minikube/files for local assets ...
	I1213 19:07:52.308413   92925 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem -> 46372.pem in /etc/ssl/certs
	I1213 19:07:52.308423   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem -> /etc/ssl/certs/46372.pem
	I1213 19:07:52.308523   92925 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 19:07:52.315982   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem --> /etc/ssl/certs/46372.pem (1708 bytes)
	I1213 19:07:52.333023   92925 start.go:296] duration metric: took 156.543018ms for postStartSetup
	I1213 19:07:52.333100   92925 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 19:07:52.333150   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114
	I1213 19:07:52.353818   92925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/ha-605114/id_rsa Username:docker}
	I1213 19:07:52.454237   92925 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 19:07:52.459167   92925 fix.go:56] duration metric: took 4.893676995s for fixHost
	I1213 19:07:52.459203   92925 start.go:83] releasing machines lock for "ha-605114", held for 4.893726932s
	I1213 19:07:52.459271   92925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-605114
	I1213 19:07:52.475811   92925 ssh_runner.go:195] Run: cat /version.json
	I1213 19:07:52.475832   92925 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 19:07:52.475868   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114
	I1213 19:07:52.475886   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114
	I1213 19:07:52.494277   92925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/ha-605114/id_rsa Username:docker}
	I1213 19:07:52.499565   92925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/ha-605114/id_rsa Username:docker}
	I1213 19:07:52.694122   92925 ssh_runner.go:195] Run: systemctl --version
	I1213 19:07:52.700676   92925 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 19:07:52.737939   92925 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 19:07:52.742564   92925 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 19:07:52.742632   92925 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 19:07:52.750413   92925 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 19:07:52.750438   92925 start.go:496] detecting cgroup driver to use...
	I1213 19:07:52.750469   92925 detect.go:187] detected "cgroupfs" cgroup driver on host os
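The host here reported the "cgroupfs" driver. A common way to inspect the host's cgroup hierarchy by hand (a sketch, not the exact probe minikube's detect.go performs):

    # "cgroup2fs" indicates a cgroup v2 hierarchy; "tmpfs" usually indicates v1
    stat -fc %T /sys/fs/cgroup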
	I1213 19:07:52.750516   92925 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 19:07:52.765290   92925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 19:07:52.779600   92925 docker.go:218] disabling cri-docker service (if available) ...
	I1213 19:07:52.779718   92925 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 19:07:52.795802   92925 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 19:07:52.809441   92925 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 19:07:52.921383   92925 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 19:07:53.050247   92925 docker.go:234] disabling docker service ...
	I1213 19:07:53.050357   92925 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 19:07:53.065412   92925 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 19:07:53.078985   92925 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 19:07:53.197041   92925 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 19:07:53.312016   92925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 19:07:53.324873   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 19:07:53.338465   92925 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 19:07:53.338566   92925 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:07:53.348165   92925 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 19:07:53.348244   92925 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:07:53.357334   92925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:07:53.366113   92925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:07:53.375030   92925 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 19:07:53.383092   92925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:07:53.392159   92925 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:07:53.400500   92925 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:07:53.409475   92925 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 19:07:53.416937   92925 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 19:07:53.424427   92925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 19:07:53.551020   92925 ssh_runner.go:195] Run: sudo systemctl restart crio
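The sed edits above all target /etc/crio/crio.conf.d/02-crio.conf. A hypothetical spot-check of the values they should leave behind after the restart (expected output reconstructed from the commands, not captured from this run):

    docker exec ha-605114 grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # expected (reconstructed):
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",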
	I1213 19:07:53.724377   92925 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 19:07:53.724453   92925 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 19:07:53.728412   92925 start.go:564] Will wait 60s for crictl version
	I1213 19:07:53.728528   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:07:53.732393   92925 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 19:07:53.759934   92925 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 19:07:53.760022   92925 ssh_runner.go:195] Run: crio --version
	I1213 19:07:53.792422   92925 ssh_runner.go:195] Run: crio --version
	I1213 19:07:53.826233   92925 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1213 19:07:53.829188   92925 cli_runner.go:164] Run: docker network inspect ha-605114 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 19:07:53.845641   92925 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 19:07:53.849708   92925 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 19:07:53.860398   92925 kubeadm.go:884] updating cluster {Name:ha-605114 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-605114 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 19:07:53.860545   92925 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 19:07:53.860602   92925 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 19:07:53.896899   92925 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 19:07:53.896925   92925 crio.go:433] Images already preloaded, skipping extraction
	I1213 19:07:53.896980   92925 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 19:07:53.927660   92925 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 19:07:53.927686   92925 cache_images.go:86] Images are preloaded, skipping loading
	I1213 19:07:53.927694   92925 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1213 19:07:53.927835   92925 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-605114 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-605114 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 19:07:53.927943   92925 ssh_runner.go:195] Run: crio config
	I1213 19:07:53.983293   92925 cni.go:84] Creating CNI manager for ""
	I1213 19:07:53.983320   92925 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1213 19:07:53.983344   92925 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 19:07:53.983367   92925 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-605114 NodeName:ha-605114 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 19:07:53.983512   92925 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-605114"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 19:07:53.983533   92925 kube-vip.go:115] generating kube-vip config ...
	I1213 19:07:53.983586   92925 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1213 19:07:53.998146   92925 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1213 19:07:53.998359   92925 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
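Because the ip_vs modules were not available, kube-vip in this configuration runs without IPVS control-plane load-balancing and only manages the VIP 192.168.49.254 via ARP on eth0. A hedged set of manual checks for this setup, assuming the report's out/minikube-linux-arm64 binary and that curl is available in the node image (the manifest path matches the scp a few lines below):

    out/minikube-linux-arm64 -p ha-605114 ssh -- "lsmod | grep ip_vs || echo 'ip_vs not loaded'"
    out/minikube-linux-arm64 -p ha-605114 ssh -- sudo cat /etc/kubernetes/manifests/kube-vip.yaml
    out/minikube-linux-arm64 -p ha-605114 ssh -- curl -sk https://192.168.49.254:8443/version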
	I1213 19:07:53.998456   92925 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 19:07:54.007466   92925 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 19:07:54.007601   92925 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1213 19:07:54.016257   92925 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1213 19:07:54.030166   92925 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 19:07:54.043943   92925 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1213 19:07:54.057568   92925 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1213 19:07:54.070913   92925 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1213 19:07:54.074912   92925 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 19:07:54.085321   92925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 19:07:54.204815   92925 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 19:07:54.219656   92925 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114 for IP: 192.168.49.2
	I1213 19:07:54.219678   92925 certs.go:195] generating shared ca certs ...
	I1213 19:07:54.219703   92925 certs.go:227] acquiring lock for ca certs: {Name:mkf9b87b9a1a82bdfcae8a54365751154f6d858f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:07:54.219837   92925 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key
	I1213 19:07:54.219890   92925 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key
	I1213 19:07:54.219904   92925 certs.go:257] generating profile certs ...
	I1213 19:07:54.219983   92925 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/client.key
	I1213 19:07:54.220016   92925 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.key.6ef1fccc
	I1213 19:07:54.220035   92925 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.crt.6ef1fccc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I1213 19:07:54.524208   92925 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.crt.6ef1fccc ...
	I1213 19:07:54.524279   92925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.crt.6ef1fccc: {Name:mk2a78acb3455aba2154553b94cc00acb06ef2bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:07:54.524506   92925 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.key.6ef1fccc ...
	I1213 19:07:54.524551   92925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.key.6ef1fccc: {Name:mk04e3ed8a0db9ab16dbffd5c3b9073d491094e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:07:54.524690   92925 certs.go:382] copying /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.crt.6ef1fccc -> /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.crt
	I1213 19:07:54.524872   92925 certs.go:386] copying /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.key.6ef1fccc -> /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.key
	I1213 19:07:54.525075   92925 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/proxy-client.key
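The regenerated apiserver cert is what lets the control plane answer on the HA VIP (192.168.49.254) as well as the node IPs. A quick way to confirm the SAN list matches the IPs logged above (a sketch, run on the Jenkins host where these profile files live):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.crt \
      | grep -A1 'Subject Alternative Name'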
	I1213 19:07:54.525118   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1213 19:07:54.525152   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1213 19:07:54.525194   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1213 19:07:54.525228   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1213 19:07:54.525260   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1213 19:07:54.525307   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1213 19:07:54.525343   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1213 19:07:54.525371   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1213 19:07:54.525461   92925 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637.pem (1338 bytes)
	W1213 19:07:54.525519   92925 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637_empty.pem, impossibly tiny 0 bytes
	I1213 19:07:54.525567   92925 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 19:07:54.525619   92925 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem (1082 bytes)
	I1213 19:07:54.525684   92925 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem (1123 bytes)
	I1213 19:07:54.525769   92925 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem (1675 bytes)
	I1213 19:07:54.525903   92925 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem (1708 bytes)
	I1213 19:07:54.525966   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:07:54.526009   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637.pem -> /usr/share/ca-certificates/4637.pem
	I1213 19:07:54.526041   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem -> /usr/share/ca-certificates/46372.pem
	I1213 19:07:54.526676   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 19:07:54.547219   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 19:07:54.566530   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 19:07:54.584290   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 19:07:54.601920   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1213 19:07:54.619619   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 19:07:54.637359   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 19:07:54.654838   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 19:07:54.674423   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 19:07:54.692475   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637.pem --> /usr/share/ca-certificates/4637.pem (1338 bytes)
	I1213 19:07:54.711269   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem --> /usr/share/ca-certificates/46372.pem (1708 bytes)
	I1213 19:07:54.730584   92925 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 19:07:54.744548   92925 ssh_runner.go:195] Run: openssl version
	I1213 19:07:54.750950   92925 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/46372.pem
	I1213 19:07:54.759097   92925 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/46372.pem /etc/ssl/certs/46372.pem
	I1213 19:07:54.766678   92925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/46372.pem
	I1213 19:07:54.770469   92925 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 18:27 /usr/share/ca-certificates/46372.pem
	I1213 19:07:54.770573   92925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/46372.pem
	I1213 19:07:54.811925   92925 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 19:07:54.820248   92925 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:07:54.829596   92925 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 19:07:54.843944   92925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:07:54.848466   92925 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:07:54.848527   92925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:07:54.910394   92925 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 19:07:54.922018   92925 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4637.pem
	I1213 19:07:54.934942   92925 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4637.pem /etc/ssl/certs/4637.pem
	I1213 19:07:54.943147   92925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4637.pem
	I1213 19:07:54.953686   92925 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 18:27 /usr/share/ca-certificates/4637.pem
	I1213 19:07:54.953799   92925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4637.pem
	I1213 19:07:55.020871   92925 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
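The 3ec20f2e.0, b5213941.0 and 51391683.0 names above are OpenSSL subject-hash link names: each /etc/ssl/certs/<hash>.0 entry is named after the value `openssl x509 -hash` prints for the corresponding PEM. A sketch of reproducing this on the node for the CA checked above:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941 for this CA
    ls -l /etc/ssl/certs/b5213941.0                                           # the hash-named link tested above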
	I1213 19:07:55.034570   92925 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 19:07:55.045312   92925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 19:07:55.146347   92925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 19:07:55.197938   92925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 19:07:55.240888   92925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 19:07:55.293579   92925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 19:07:55.349397   92925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
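The six openssl probes above use -checkend 86400 to fail if a cert expires within the next 24 hours. The same round of checks can be expressed as a small loop on the node (hypothetical; cert paths taken from the log):

    for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
             etcd/server etcd/healthcheck-client etcd/peer; do
      sudo openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/${c}.crt" \
        || echo "WARNING: ${c}.crt expires within 24h"
    done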
	I1213 19:07:55.405749   92925 kubeadm.go:401] StartCluster: {Name:ha-605114 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-605114 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 19:07:55.405941   92925 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 19:07:55.406039   92925 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 19:07:55.476432   92925 cri.go:89] found id: "23b44f60db0dc9ad888430163cce4adc2cef45e4fff10aded1fd37e36e5d5955"
	I1213 19:07:55.476492   92925 cri.go:89] found id: "9a81ddd488bb7e9ca9d20cc8af4e9414463f3bf2bd40edd26c2e9395f731a3ec"
	I1213 19:07:55.476519   92925 cri.go:89] found id: "ee202abc8dba3b97ac56d7c3063ce4fae0734134ba47b9d6070588c897f7baf0"
	I1213 19:07:55.476536   92925 cri.go:89] found id: "3c729bb1538bfb45bc9b5542f5524916c96b118344d2be8a42e58a0bc6d4cb0d"
	I1213 19:07:55.476570   92925 cri.go:89] found id: "2b3744a5aa7a90a9d9036f0de528d8ed7e951f80254fa43fd57f666e0a6ccc86"
	I1213 19:07:55.476591   92925 cri.go:89] found id: ""
	I1213 19:07:55.476674   92925 ssh_runner.go:195] Run: sudo runc list -f json
	W1213 19:07:55.502827   92925 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T19:07:55Z" level=error msg="open /run/runc: no such file or directory"
	I1213 19:07:55.502965   92925 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 19:07:55.514772   92925 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 19:07:55.514841   92925 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 19:07:55.514932   92925 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 19:07:55.530907   92925 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 19:07:55.531414   92925 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-605114" does not appear in /home/jenkins/minikube-integration/22122-2686/kubeconfig
	I1213 19:07:55.531569   92925 kubeconfig.go:62] /home/jenkins/minikube-integration/22122-2686/kubeconfig needs updating (will repair): [kubeconfig missing "ha-605114" cluster setting kubeconfig missing "ha-605114" context setting]
	I1213 19:07:55.531908   92925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/kubeconfig: {Name:mkd364151ba0e08b56cf2ae826abbcc274faaabf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:07:55.532529   92925 kapi.go:59] client config for ha-605114: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/client.crt", KeyFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/client.key", CAFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 19:07:55.533545   92925 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1213 19:07:55.533623   92925 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1213 19:07:55.533709   92925 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1213 19:07:55.533743   92925 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1213 19:07:55.533762   92925 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1213 19:07:55.533784   92925 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1213 19:07:55.534156   92925 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 19:07:55.550155   92925 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1213 19:07:55.550227   92925 kubeadm.go:602] duration metric: took 35.349185ms to restartPrimaryControlPlane
	I1213 19:07:55.550251   92925 kubeadm.go:403] duration metric: took 144.511847ms to StartCluster
	I1213 19:07:55.550281   92925 settings.go:142] acquiring lock: {Name:mkabef07beee93a0619ef6b8f854900ab9ed0899 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:07:55.550405   92925 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22122-2686/kubeconfig
	I1213 19:07:55.551146   92925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/kubeconfig: {Name:mkd364151ba0e08b56cf2ae826abbcc274faaabf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:07:55.551412   92925 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 19:07:55.551467   92925 start.go:242] waiting for startup goroutines ...
	I1213 19:07:55.551494   92925 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 19:07:55.552092   92925 config.go:182] Loaded profile config "ha-605114": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 19:07:55.557393   92925 out.go:179] * Enabled addons: 
	I1213 19:07:55.560282   92925 addons.go:530] duration metric: took 8.786078ms for enable addons: enabled=[]
	I1213 19:07:55.560370   92925 start.go:247] waiting for cluster config update ...
	I1213 19:07:55.560416   92925 start.go:256] writing updated cluster config ...
	I1213 19:07:55.563604   92925 out.go:203] 
	I1213 19:07:55.566673   92925 config.go:182] Loaded profile config "ha-605114": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 19:07:55.566871   92925 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/config.json ...
	I1213 19:07:55.570151   92925 out.go:179] * Starting "ha-605114-m02" control-plane node in "ha-605114" cluster
	I1213 19:07:55.572987   92925 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 19:07:55.575841   92925 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 19:07:55.578800   92925 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 19:07:55.578823   92925 cache.go:65] Caching tarball of preloaded images
	I1213 19:07:55.578933   92925 preload.go:238] Found /home/jenkins/minikube-integration/22122-2686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 19:07:55.578943   92925 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1213 19:07:55.579063   92925 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/config.json ...
	I1213 19:07:55.579269   92925 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 19:07:55.599207   92925 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 19:07:55.599233   92925 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 19:07:55.599247   92925 cache.go:243] Successfully downloaded all kic artifacts
	I1213 19:07:55.599269   92925 start.go:360] acquireMachinesLock for ha-605114-m02: {Name:mk43db0c2b2ac44e0e8dc9a68aa6922f0bb2fccb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 19:07:55.599325   92925 start.go:364] duration metric: took 36.989µs to acquireMachinesLock for "ha-605114-m02"
	I1213 19:07:55.599348   92925 start.go:96] Skipping create...Using existing machine configuration
	I1213 19:07:55.599358   92925 fix.go:54] fixHost starting: m02
	I1213 19:07:55.599613   92925 cli_runner.go:164] Run: docker container inspect ha-605114-m02 --format={{.State.Status}}
	I1213 19:07:55.630999   92925 fix.go:112] recreateIfNeeded on ha-605114-m02: state=Stopped err=<nil>
	W1213 19:07:55.631030   92925 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 19:07:55.634239   92925 out.go:252] * Restarting existing docker container for "ha-605114-m02" ...
	I1213 19:07:55.634323   92925 cli_runner.go:164] Run: docker start ha-605114-m02
	I1213 19:07:56.013613   92925 cli_runner.go:164] Run: docker container inspect ha-605114-m02 --format={{.State.Status}}
	I1213 19:07:56.043229   92925 kic.go:430] container "ha-605114-m02" state is running.
	I1213 19:07:56.043952   92925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-605114-m02
	I1213 19:07:56.072863   92925 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/config.json ...
	I1213 19:07:56.073198   92925 machine.go:94] provisionDockerMachine start ...
	I1213 19:07:56.073260   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114-m02
	I1213 19:07:56.107315   92925 main.go:143] libmachine: Using SSH client type: native
	I1213 19:07:56.107694   92925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I1213 19:07:56.107711   92925 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 19:07:56.108441   92925 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1213 19:07:59.320519   92925 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-605114-m02
	
	I1213 19:07:59.320540   92925 ubuntu.go:182] provisioning hostname "ha-605114-m02"
	I1213 19:07:59.320600   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114-m02
	I1213 19:07:59.354148   92925 main.go:143] libmachine: Using SSH client type: native
	I1213 19:07:59.354465   92925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I1213 19:07:59.354476   92925 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-605114-m02 && echo "ha-605114-m02" | sudo tee /etc/hostname
	I1213 19:07:59.560753   92925 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-605114-m02
	
	I1213 19:07:59.560835   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114-m02
	I1213 19:07:59.590681   92925 main.go:143] libmachine: Using SSH client type: native
	I1213 19:07:59.590982   92925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I1213 19:07:59.590997   92925 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-605114-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-605114-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-605114-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 19:07:59.777428   92925 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 19:07:59.777502   92925 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-2686/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-2686/.minikube}
	I1213 19:07:59.777532   92925 ubuntu.go:190] setting up certificates
	I1213 19:07:59.777573   92925 provision.go:84] configureAuth start
	I1213 19:07:59.777669   92925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-605114-m02
	I1213 19:07:59.806547   92925 provision.go:143] copyHostCerts
	I1213 19:07:59.806589   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem
	I1213 19:07:59.806621   92925 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem, removing ...
	I1213 19:07:59.806628   92925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem
	I1213 19:07:59.806709   92925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem (1123 bytes)
	I1213 19:07:59.806788   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem
	I1213 19:07:59.806805   92925 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem, removing ...
	I1213 19:07:59.806810   92925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem
	I1213 19:07:59.806854   92925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem (1675 bytes)
	I1213 19:07:59.806898   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem
	I1213 19:07:59.806916   92925 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem, removing ...
	I1213 19:07:59.806920   92925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem
	I1213 19:07:59.806944   92925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem (1082 bytes)
	I1213 19:07:59.806989   92925 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem org=jenkins.ha-605114-m02 san=[127.0.0.1 192.168.49.3 ha-605114-m02 localhost minikube]
	I1213 19:07:59.961185   92925 provision.go:177] copyRemoteCerts
	I1213 19:07:59.961261   92925 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 19:07:59.961306   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114-m02
	I1213 19:07:59.986810   92925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/ha-605114-m02/id_rsa Username:docker}
	I1213 19:08:00.131955   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1213 19:08:00.132032   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 19:08:00.173539   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1213 19:08:00.173623   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1213 19:08:00.207894   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1213 19:08:00.207965   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 19:08:00.244666   92925 provision.go:87] duration metric: took 467.054938ms to configureAuth
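For reference: the server certificate regenerated at provision.go:117 above is issued with the SANs 127.0.0.1, 192.168.49.3, ha-605114-m02, localhost and minikube, and copyRemoteCerts places it at /etc/docker/server.pem on the node. A minimal sketch of checking those SANs by hand, using only the paths shown in this log:

    # On ha-605114-m02: list the Subject Alternative Names of the server cert copied above
    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'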
	I1213 19:08:00.244712   92925 ubuntu.go:206] setting minikube options for container-runtime
	I1213 19:08:00.245918   92925 config.go:182] Loaded profile config "ha-605114": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 19:08:00.246082   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114-m02
	I1213 19:08:00.327171   92925 main.go:143] libmachine: Using SSH client type: native
	I1213 19:08:00.327492   92925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I1213 19:08:00.327508   92925 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 19:08:01.970074   92925 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 19:08:01.970150   92925 machine.go:97] duration metric: took 5.896940025s to provisionDockerMachine
	I1213 19:08:01.970177   92925 start.go:293] postStartSetup for "ha-605114-m02" (driver="docker")
	I1213 19:08:01.970221   92925 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 19:08:01.970316   92925 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 19:08:01.970411   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114-m02
	I1213 19:08:02.009089   92925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/ha-605114-m02/id_rsa Username:docker}
	I1213 19:08:02.129494   92925 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 19:08:02.136549   92925 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 19:08:02.136573   92925 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 19:08:02.136585   92925 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-2686/.minikube/addons for local assets ...
	I1213 19:08:02.136646   92925 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-2686/.minikube/files for local assets ...
	I1213 19:08:02.136728   92925 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem -> 46372.pem in /etc/ssl/certs
	I1213 19:08:02.136734   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem -> /etc/ssl/certs/46372.pem
	I1213 19:08:02.136842   92925 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 19:08:02.171248   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem --> /etc/ssl/certs/46372.pem (1708 bytes)
	I1213 19:08:02.216469   92925 start.go:296] duration metric: took 246.261152ms for postStartSetup
	I1213 19:08:02.216625   92925 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 19:08:02.216685   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114-m02
	I1213 19:08:02.262639   92925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/ha-605114-m02/id_rsa Username:docker}
	I1213 19:08:02.374718   92925 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 19:08:02.380084   92925 fix.go:56] duration metric: took 6.780718951s for fixHost
	I1213 19:08:02.380108   92925 start.go:83] releasing machines lock for "ha-605114-m02", held for 6.780770726s
	I1213 19:08:02.380176   92925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-605114-m02
	I1213 19:08:02.401071   92925 out.go:179] * Found network options:
	I1213 19:08:02.404164   92925 out.go:179]   - NO_PROXY=192.168.49.2
	W1213 19:08:02.407079   92925 proxy.go:120] fail to check proxy env: Error ip not in block
	W1213 19:08:02.407127   92925 proxy.go:120] fail to check proxy env: Error ip not in block
	I1213 19:08:02.407198   92925 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 19:08:02.407241   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114-m02
	I1213 19:08:02.407257   92925 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 19:08:02.407313   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114-m02
	I1213 19:08:02.441677   92925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/ha-605114-m02/id_rsa Username:docker}
	I1213 19:08:02.462715   92925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/ha-605114-m02/id_rsa Username:docker}
	I1213 19:08:02.700903   92925 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 19:08:02.788606   92925 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 19:08:02.788680   92925 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 19:08:02.802406   92925 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 19:08:02.802471   92925 start.go:496] detecting cgroup driver to use...
	I1213 19:08:02.802520   92925 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 19:08:02.802599   92925 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 19:08:02.821557   92925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 19:08:02.843971   92925 docker.go:218] disabling cri-docker service (if available) ...
	I1213 19:08:02.844081   92925 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 19:08:02.866953   92925 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 19:08:02.884909   92925 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 19:08:03.137948   92925 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 19:08:03.363884   92925 docker.go:234] disabling docker service ...
	I1213 19:08:03.363990   92925 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 19:08:03.388880   92925 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 19:08:03.405597   92925 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 19:08:03.645933   92925 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 19:08:03.919704   92925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 19:08:03.941774   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 19:08:03.972913   92925 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 19:08:03.973103   92925 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:08:03.988083   92925 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 19:08:03.988256   92925 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:08:04.019667   92925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:08:04.031645   92925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:08:04.049709   92925 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 19:08:04.086713   92925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:08:04.109181   92925 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:08:04.119963   92925 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:08:04.154436   92925 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 19:08:04.170086   92925 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 19:08:04.191001   92925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 19:08:04.484381   92925 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 19:09:34.781930   92925 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.297515083s)
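The sed edits at 19:08:03-19:08:04 above rewrite /etc/crio/crio.conf.d/02-crio.conf before the restart that just completed (1m30s). A minimal sketch of verifying the resulting settings on the node, touching only the keys and paths those commands edited:

    # Confirm the CRI-O settings written above (pause image, cgroup driver, conmon cgroup, sysctl)
    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    # crictl was pointed at the CRI-O socket via /etc/crictl.yaml
    cat /etc/crictl.yaml
    sudo systemctl is-active crio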
	I1213 19:09:34.781956   92925 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 19:09:34.782006   92925 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 19:09:34.785743   92925 start.go:564] Will wait 60s for crictl version
	I1213 19:09:34.785812   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:09:34.789353   92925 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 19:09:34.818524   92925 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 19:09:34.818612   92925 ssh_runner.go:195] Run: crio --version
	I1213 19:09:34.852441   92925 ssh_runner.go:195] Run: crio --version
	I1213 19:09:34.887257   92925 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1213 19:09:34.890293   92925 out.go:179]   - env NO_PROXY=192.168.49.2
	I1213 19:09:34.893426   92925 cli_runner.go:164] Run: docker network inspect ha-605114 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 19:09:34.911684   92925 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 19:09:34.915601   92925 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 19:09:34.925402   92925 mustload.go:66] Loading cluster: ha-605114
	I1213 19:09:34.925637   92925 config.go:182] Loaded profile config "ha-605114": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 19:09:34.925900   92925 cli_runner.go:164] Run: docker container inspect ha-605114 --format={{.State.Status}}
	I1213 19:09:34.944458   92925 host.go:66] Checking if "ha-605114" exists ...
	I1213 19:09:34.944731   92925 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114 for IP: 192.168.49.3
	I1213 19:09:34.944745   92925 certs.go:195] generating shared ca certs ...
	I1213 19:09:34.944760   92925 certs.go:227] acquiring lock for ca certs: {Name:mkf9b87b9a1a82bdfcae8a54365751154f6d858f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:09:34.944889   92925 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key
	I1213 19:09:34.944944   92925 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key
	I1213 19:09:34.944957   92925 certs.go:257] generating profile certs ...
	I1213 19:09:34.945069   92925 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/client.key
	I1213 19:09:34.945157   92925 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.key.29c07aea
	I1213 19:09:34.945202   92925 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/proxy-client.key
	I1213 19:09:34.945215   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1213 19:09:34.945230   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1213 19:09:34.945254   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1213 19:09:34.945266   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1213 19:09:34.945281   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1213 19:09:34.945294   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1213 19:09:34.945309   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1213 19:09:34.945328   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1213 19:09:34.945383   92925 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637.pem (1338 bytes)
	W1213 19:09:34.945424   92925 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637_empty.pem, impossibly tiny 0 bytes
	I1213 19:09:34.945446   92925 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 19:09:34.945479   92925 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem (1082 bytes)
	I1213 19:09:34.945508   92925 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem (1123 bytes)
	I1213 19:09:34.945538   92925 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem (1675 bytes)
	I1213 19:09:34.945583   92925 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem (1708 bytes)
	I1213 19:09:34.945616   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:09:34.945632   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637.pem -> /usr/share/ca-certificates/4637.pem
	I1213 19:09:34.945649   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem -> /usr/share/ca-certificates/46372.pem
	I1213 19:09:34.945719   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114
	I1213 19:09:34.963328   92925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/ha-605114/id_rsa Username:docker}
	I1213 19:09:35.065324   92925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1213 19:09:35.069081   92925 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1213 19:09:35.077819   92925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1213 19:09:35.081455   92925 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1213 19:09:35.089763   92925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1213 19:09:35.093612   92925 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1213 19:09:35.102260   92925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1213 19:09:35.106728   92925 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1213 19:09:35.115519   92925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1213 19:09:35.119196   92925 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1213 19:09:35.129001   92925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1213 19:09:35.132624   92925 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1213 19:09:35.141653   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 19:09:35.161897   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 19:09:35.182131   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 19:09:35.202060   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 19:09:35.222310   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1213 19:09:35.243497   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 19:09:35.265517   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 19:09:35.284987   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 19:09:35.302971   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 19:09:35.320388   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637.pem --> /usr/share/ca-certificates/4637.pem (1338 bytes)
	I1213 19:09:35.338865   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem --> /usr/share/ca-certificates/46372.pem (1708 bytes)
	I1213 19:09:35.356332   92925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1213 19:09:35.369616   92925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1213 19:09:35.383108   92925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1213 19:09:35.396928   92925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1213 19:09:35.410529   92925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1213 19:09:35.423162   92925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1213 19:09:35.436667   92925 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1213 19:09:35.450451   92925 ssh_runner.go:195] Run: openssl version
	I1213 19:09:35.457142   92925 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:09:35.464516   92925 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 19:09:35.472169   92925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:09:35.475920   92925 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:09:35.475984   92925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:09:35.516956   92925 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 19:09:35.524426   92925 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4637.pem
	I1213 19:09:35.532136   92925 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4637.pem /etc/ssl/certs/4637.pem
	I1213 19:09:35.539767   92925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4637.pem
	I1213 19:09:35.543798   92925 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 18:27 /usr/share/ca-certificates/4637.pem
	I1213 19:09:35.543906   92925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4637.pem
	I1213 19:09:35.586837   92925 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 19:09:35.594791   92925 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/46372.pem
	I1213 19:09:35.602550   92925 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/46372.pem /etc/ssl/certs/46372.pem
	I1213 19:09:35.610984   92925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/46372.pem
	I1213 19:09:35.614895   92925 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 18:27 /usr/share/ca-certificates/46372.pem
	I1213 19:09:35.614973   92925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/46372.pem
	I1213 19:09:35.661484   92925 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
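Each CA installed above is symlinked into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0, 51391683.0 and 3ec20f2e.0 in this run). A minimal sketch of reproducing one of those checks by hand:

    # Compute the subject hash for minikubeCA.pem and confirm the matching symlink exists
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    ls -l "/etc/ssl/certs/${h}.0"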
	I1213 19:09:35.668847   92925 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 19:09:35.672924   92925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 19:09:35.714926   92925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 19:09:35.757278   92925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 19:09:35.798060   92925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 19:09:35.840340   92925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 19:09:35.883228   92925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
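The -checkend 86400 runs above ask openssl whether each certificate is still valid 86400 seconds (24 hours) from now; exit status 0 means it is. A minimal sketch of the same check with an explicit result, using one of the paths from this log:

    # 0 = still valid in 24h, 1 = expires within 24h
    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "valid for at least 24h" || echo "expires within 24h"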
	I1213 19:09:35.926498   92925 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.2 crio true true} ...
	I1213 19:09:35.926597   92925 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-605114-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-605114 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
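The kubelet drop-in above pins --hostname-override=ha-605114-m02 and --node-ip=192.168.49.3 for this secondary control plane; it is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down. A minimal sketch of confirming those flags on the node:

    # Check the per-node kubelet flags carried by the drop-in above
    grep -E 'node-ip|hostname-override' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    systemctl status kubelet --no-pager | head -n 5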
	I1213 19:09:35.926628   92925 kube-vip.go:115] generating kube-vip config ...
	I1213 19:09:35.926680   92925 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1213 19:09:35.939407   92925 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1213 19:09:35.939464   92925 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
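kube-vip.go:163 above gives up on control-plane load-balancing because "lsmod | grep ip_vs" produced no output, so the generated manifest runs kube-vip in plain ARP/VIP mode on 192.168.49.254. A minimal sketch of how that check could be satisfied, assuming the ip_vs module exists in the host kernel (it may not be loadable from inside the docker-driver node):

    # Load the IPVS module and re-run the exact check minikube performed above
    sudo modprobe ip_vs
    lsmod | grep ip_vs   # non-empty output means the earlier check would have passed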
	I1213 19:09:35.939538   92925 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 19:09:35.948342   92925 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 19:09:35.948446   92925 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1213 19:09:35.956523   92925 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1213 19:09:35.970227   92925 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 19:09:35.985384   92925 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1213 19:09:36.004385   92925 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1213 19:09:36.008483   92925 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 19:09:36.019218   92925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 19:09:36.155982   92925 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 19:09:36.170330   92925 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 19:09:36.170793   92925 config.go:182] Loaded profile config "ha-605114": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 19:09:36.174251   92925 out.go:179] * Verifying Kubernetes components...
	I1213 19:09:36.177213   92925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 19:09:36.319740   92925 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 19:09:36.334811   92925 kapi.go:59] client config for ha-605114: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/client.crt", KeyFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/client.key", CAFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1213 19:09:36.334886   92925 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1213 19:09:36.335095   92925 node_ready.go:35] waiting up to 6m0s for node "ha-605114-m02" to be "Ready" ...
	I1213 19:09:39.281934   92925 node_ready.go:49] node "ha-605114-m02" is "Ready"
	I1213 19:09:39.281962   92925 node_ready.go:38] duration metric: took 2.946847766s for node "ha-605114-m02" to be "Ready" ...
	I1213 19:09:39.281975   92925 api_server.go:52] waiting for apiserver process to appear ...
	I1213 19:09:39.282034   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:39.782149   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:40.282856   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:40.782144   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:41.282958   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:41.782581   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:42.282264   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:42.782257   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:43.283132   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:43.782112   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:44.282168   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:44.782088   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:45.282593   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:45.782122   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:46.282927   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:46.782182   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:47.282980   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:47.783112   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:48.282633   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:48.782211   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:49.282732   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:49.782187   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:50.282735   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:50.782142   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:51.282519   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:51.782152   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:52.282197   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:52.782636   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:53.282768   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:53.782116   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:54.282300   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:54.782182   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:55.282883   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:55.783092   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:56.282203   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:56.783098   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:57.282717   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:57.782189   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:58.282252   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:58.782909   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:59.282100   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:59.782310   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:00.289145   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:00.782212   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:01.282192   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:01.782760   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:02.282108   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:02.782972   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:03.282353   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:03.782328   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:04.282366   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:04.782174   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:05.282835   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:05.782488   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:06.283036   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:06.782436   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:07.282292   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:07.782212   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:08.283033   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:08.783070   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:09.282897   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:09.782668   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:10.282222   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:10.782267   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:11.282198   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:11.782837   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:12.282212   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:12.783009   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:13.282406   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:13.782556   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:14.283140   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:14.782783   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:15.283077   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:15.783150   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:16.282934   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:16.783092   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:17.282186   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:17.782253   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:18.282771   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:18.782339   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:19.282255   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:19.782254   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:20.282346   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:20.782992   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:21.282270   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:21.782169   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:22.282176   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:22.782681   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:23.282402   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:23.783116   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:24.282118   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:24.782962   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:25.283031   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:25.783024   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:26.283105   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:26.782110   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:27.282833   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:27.782332   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:28.282978   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:28.782284   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:29.283095   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:29.782866   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:30.282438   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:30.782580   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:31.282697   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:31.783148   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:32.283119   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:32.782971   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:33.282108   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:33.783088   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:34.283075   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:34.782667   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:35.282868   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:35.782514   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:36.282200   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:10:36.282308   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:10:36.311092   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:36.311117   92925 cri.go:89] found id: ""
	I1213 19:10:36.311125   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:10:36.311180   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:36.314888   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:10:36.314970   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:10:36.342553   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:36.342573   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:36.342578   92925 cri.go:89] found id: ""
	I1213 19:10:36.342586   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:10:36.342655   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:36.346486   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:36.349986   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:10:36.350061   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:10:36.375198   92925 cri.go:89] found id: ""
	I1213 19:10:36.375262   92925 logs.go:282] 0 containers: []
	W1213 19:10:36.375275   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:10:36.375281   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:10:36.375350   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:10:36.406767   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:36.406789   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:36.406794   92925 cri.go:89] found id: ""
	I1213 19:10:36.406801   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:10:36.406857   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:36.410743   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:36.414390   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:10:36.414490   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:10:36.441810   92925 cri.go:89] found id: ""
	I1213 19:10:36.441833   92925 logs.go:282] 0 containers: []
	W1213 19:10:36.441841   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:10:36.441848   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:10:36.441911   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:10:36.468354   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:36.468374   92925 cri.go:89] found id: ""
	I1213 19:10:36.468382   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:10:36.468436   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:36.472238   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:10:36.472316   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:10:36.500356   92925 cri.go:89] found id: ""
	I1213 19:10:36.500383   92925 logs.go:282] 0 containers: []
	W1213 19:10:36.500394   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:10:36.500404   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:10:36.500414   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:10:36.593811   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:10:36.593845   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:10:36.607625   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:10:36.607656   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:10:37.031907   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:10:37.023726    1511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:37.024402    1511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:37.025999    1511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:37.026604    1511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:37.028296    1511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:10:37.023726    1511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:37.024402    1511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:37.025999    1511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:37.026604    1511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:37.028296    1511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:10:37.031933   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:10:37.031948   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:37.057050   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:10:37.057079   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:37.097228   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:10:37.097262   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:37.148963   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:10:37.149014   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:37.217399   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:10:37.217436   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:37.248174   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:10:37.248203   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:37.274722   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:10:37.274748   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:10:37.355342   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:10:37.355379   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
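The log-gathering pass above can be reproduced by hand with the same crictl and journalctl calls minikube issues; a minimal sketch using the kube-apiserver container ID found in this run:

    # List apiserver containers, tail the newest one's logs, then the CRI-O journal
    sudo crictl ps -a --quiet --name=kube-apiserver
    sudo crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e
    sudo journalctl -u crio -n 400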
	I1213 19:10:39.885413   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:39.896181   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:10:39.896250   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:10:39.928054   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:39.928078   92925 cri.go:89] found id: ""
	I1213 19:10:39.928087   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:10:39.928142   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:39.932690   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:10:39.932760   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:10:39.962089   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:39.962110   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:39.962114   92925 cri.go:89] found id: ""
	I1213 19:10:39.962122   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:10:39.962178   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:39.966008   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:39.970141   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:10:39.970211   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:10:40.031915   92925 cri.go:89] found id: ""
	I1213 19:10:40.031938   92925 logs.go:282] 0 containers: []
	W1213 19:10:40.031947   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:10:40.031954   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:10:40.032022   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:10:40.075124   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:40.075145   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:40.075150   92925 cri.go:89] found id: ""
	I1213 19:10:40.075157   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:10:40.075216   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:40.079588   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:40.083956   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:10:40.084077   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:10:40.120592   92925 cri.go:89] found id: ""
	I1213 19:10:40.120623   92925 logs.go:282] 0 containers: []
	W1213 19:10:40.120633   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:10:40.120640   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:10:40.120707   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:10:40.162573   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:40.162599   92925 cri.go:89] found id: ""
	I1213 19:10:40.162620   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:10:40.162692   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:40.167731   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:10:40.167810   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:10:40.197646   92925 cri.go:89] found id: ""
	I1213 19:10:40.197681   92925 logs.go:282] 0 containers: []
	W1213 19:10:40.197692   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:10:40.197701   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:10:40.197714   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:10:40.279428   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:10:40.270096    1639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:40.270945    1639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:40.271678    1639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:40.273521    1639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:40.274072    1639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:10:40.270096    1639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:40.270945    1639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:40.271678    1639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:40.273521    1639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:40.274072    1639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:10:40.279462   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:10:40.279476   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:40.317833   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:10:40.317867   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:40.365303   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:10:40.365339   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:40.391972   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:10:40.392006   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:10:40.467785   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:10:40.467824   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:10:40.499555   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:10:40.499587   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:10:40.601537   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:10:40.601571   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:10:40.614326   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:10:40.614357   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:40.643794   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:10:40.643823   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:40.696205   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:10:40.696242   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:43.224045   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:43.234786   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:10:43.234854   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:10:43.262459   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:43.262481   92925 cri.go:89] found id: ""
	I1213 19:10:43.262489   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:10:43.262544   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:43.267289   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:10:43.267362   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:10:43.294825   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:43.294846   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:43.294858   92925 cri.go:89] found id: ""
	I1213 19:10:43.294873   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:10:43.294931   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:43.298717   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:43.302500   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:10:43.302576   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:10:43.328978   92925 cri.go:89] found id: ""
	I1213 19:10:43.329001   92925 logs.go:282] 0 containers: []
	W1213 19:10:43.329048   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:10:43.329055   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:10:43.329115   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:10:43.358394   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:43.358419   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:43.358426   92925 cri.go:89] found id: ""
	I1213 19:10:43.358434   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:10:43.358544   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:43.363176   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:43.366906   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:10:43.366996   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:10:43.396556   92925 cri.go:89] found id: ""
	I1213 19:10:43.396583   92925 logs.go:282] 0 containers: []
	W1213 19:10:43.396592   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:10:43.396598   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:10:43.396657   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:10:43.422776   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:43.422803   92925 cri.go:89] found id: ""
	I1213 19:10:43.422813   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:10:43.422886   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:43.426512   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:10:43.426579   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:10:43.452942   92925 cri.go:89] found id: ""
	I1213 19:10:43.452966   92925 logs.go:282] 0 containers: []
	W1213 19:10:43.452975   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:10:43.452984   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:10:43.452996   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:43.479637   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:10:43.479708   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:10:43.492492   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:10:43.492521   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:43.555898   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:10:43.555930   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:43.583059   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:10:43.583089   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:10:43.665528   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:10:43.665562   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:10:43.713108   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:10:43.713136   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:10:43.817894   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:10:43.817930   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:10:43.900953   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:10:43.892916    1822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:43.893797    1822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:43.895356    1822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:43.895650    1822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:43.897247    1822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:10:43.892916    1822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:43.893797    1822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:43.895356    1822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:43.895650    1822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:43.897247    1822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:10:43.900978   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:10:43.900992   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:43.928040   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:10:43.928067   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:43.989295   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:10:43.989349   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:46.551759   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:46.562922   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:10:46.562999   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:10:46.590576   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:46.590607   92925 cri.go:89] found id: ""
	I1213 19:10:46.590615   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:10:46.590669   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:46.594481   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:10:46.594557   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:10:46.619444   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:46.619466   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:46.619472   92925 cri.go:89] found id: ""
	I1213 19:10:46.619480   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:10:46.619562   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:46.623350   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:46.626652   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:10:46.626726   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:10:46.655019   92925 cri.go:89] found id: ""
	I1213 19:10:46.655045   92925 logs.go:282] 0 containers: []
	W1213 19:10:46.655055   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:10:46.655061   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:10:46.655119   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:10:46.685081   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:46.685108   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:46.685113   92925 cri.go:89] found id: ""
	I1213 19:10:46.685121   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:10:46.685178   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:46.689664   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:46.693381   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:10:46.693455   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:10:46.719871   92925 cri.go:89] found id: ""
	I1213 19:10:46.719897   92925 logs.go:282] 0 containers: []
	W1213 19:10:46.719906   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:10:46.719914   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:10:46.719979   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:10:46.747153   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:46.747176   92925 cri.go:89] found id: ""
	I1213 19:10:46.747184   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:10:46.747239   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:46.751093   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:10:46.751198   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:10:46.777729   92925 cri.go:89] found id: ""
	I1213 19:10:46.777800   92925 logs.go:282] 0 containers: []
	W1213 19:10:46.777816   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:10:46.777827   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:10:46.777840   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:46.807286   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:10:46.807315   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:10:46.900226   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:10:46.900266   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:10:46.913850   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:10:46.913877   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:10:46.995097   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:10:46.986432    1930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:46.987537    1930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:46.988185    1930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:46.989944    1930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:46.990430    1930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:10:46.986432    1930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:46.987537    1930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:46.988185    1930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:46.989944    1930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:46.990430    1930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:10:46.995121   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:10:46.995146   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:47.020980   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:10:47.021038   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:47.062312   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:10:47.062348   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:10:47.143840   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:10:47.143916   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:10:47.176420   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:10:47.176455   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:47.221958   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:10:47.222003   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:47.276308   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:10:47.276349   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:49.804769   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:49.815535   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:10:49.815609   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:10:49.841153   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:49.841227   92925 cri.go:89] found id: ""
	I1213 19:10:49.841258   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:10:49.841341   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:49.844798   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:10:49.844903   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:10:49.872086   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:49.872111   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:49.872117   92925 cri.go:89] found id: ""
	I1213 19:10:49.872124   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:10:49.872178   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:49.875975   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:49.879817   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:10:49.879892   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:10:49.918961   92925 cri.go:89] found id: ""
	I1213 19:10:49.918987   92925 logs.go:282] 0 containers: []
	W1213 19:10:49.918996   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:10:49.919002   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:10:49.919059   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:10:49.959969   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:49.959994   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:49.959999   92925 cri.go:89] found id: ""
	I1213 19:10:49.960007   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:10:49.960063   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:49.964635   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:49.969140   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:10:49.969208   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:10:50.006023   92925 cri.go:89] found id: ""
	I1213 19:10:50.006049   92925 logs.go:282] 0 containers: []
	W1213 19:10:50.006058   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:10:50.006064   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:10:50.006143   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:10:50.040945   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:50.040965   92925 cri.go:89] found id: ""
	I1213 19:10:50.040973   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:10:50.041060   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:50.044991   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:10:50.045100   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:10:50.073352   92925 cri.go:89] found id: ""
	I1213 19:10:50.073383   92925 logs.go:282] 0 containers: []
	W1213 19:10:50.073409   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:10:50.073420   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:10:50.073437   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:10:50.092169   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:10:50.092219   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:50.167681   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:10:50.167719   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:50.220989   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:10:50.221028   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:50.252059   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:10:50.252091   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:10:50.358508   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:10:50.358555   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:10:50.434424   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:10:50.426219    2082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:50.426850    2082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:50.428449    2082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:50.429020    2082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:50.430880    2082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:10:50.426219    2082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:50.426850    2082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:50.428449    2082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:50.429020    2082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:50.430880    2082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:10:50.434452   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:10:50.434467   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:50.458963   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:10:50.458992   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:50.516376   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:10:50.516410   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:50.543978   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:10:50.544009   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:10:50.619429   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:10:50.619468   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:10:53.153421   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:53.163979   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:10:53.164048   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:10:53.191198   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:53.191259   92925 cri.go:89] found id: ""
	I1213 19:10:53.191291   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:10:53.191363   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:53.195132   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:10:53.195204   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:10:53.222253   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:53.222276   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:53.222280   92925 cri.go:89] found id: ""
	I1213 19:10:53.222287   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:10:53.222370   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:53.226176   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:53.229762   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:10:53.229878   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:10:53.260062   92925 cri.go:89] found id: ""
	I1213 19:10:53.260088   92925 logs.go:282] 0 containers: []
	W1213 19:10:53.260096   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:10:53.260103   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:10:53.260159   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:10:53.289940   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:53.290005   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:53.290024   92925 cri.go:89] found id: ""
	I1213 19:10:53.290037   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:10:53.290106   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:53.293745   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:53.297116   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:10:53.297199   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:10:53.324233   92925 cri.go:89] found id: ""
	I1213 19:10:53.324259   92925 logs.go:282] 0 containers: []
	W1213 19:10:53.324268   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:10:53.324274   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:10:53.324329   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:10:53.355230   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:53.355252   92925 cri.go:89] found id: ""
	I1213 19:10:53.355260   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:10:53.355312   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:53.358865   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:10:53.358932   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:10:53.388377   92925 cri.go:89] found id: ""
	I1213 19:10:53.388460   92925 logs.go:282] 0 containers: []
	W1213 19:10:53.388486   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:10:53.388531   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:10:53.388561   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:10:53.482197   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:10:53.482233   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:10:53.495635   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:10:53.495666   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:53.527174   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:10:53.527201   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:53.568473   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:10:53.568509   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:53.613038   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:10:53.613068   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:53.666213   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:10:53.666248   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:10:53.746993   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:10:53.747031   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:10:53.777726   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:10:53.777758   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:10:53.849162   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:10:53.840835    2240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:53.841725    2240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:53.842564    2240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:53.844081    2240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:53.844396    2240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:10:53.840835    2240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:53.841725    2240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:53.842564    2240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:53.844081    2240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:53.844396    2240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:10:53.849193   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:10:53.849207   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:53.879522   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:10:53.879551   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:56.408599   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:56.420063   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:10:56.420130   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:10:56.446598   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:56.446622   92925 cri.go:89] found id: ""
	I1213 19:10:56.446630   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:10:56.446691   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:56.450451   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:10:56.450519   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:10:56.477437   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:56.477460   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:56.477465   92925 cri.go:89] found id: ""
	I1213 19:10:56.477472   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:10:56.477560   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:56.481341   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:56.484891   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:10:56.484963   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:10:56.513437   92925 cri.go:89] found id: ""
	I1213 19:10:56.513459   92925 logs.go:282] 0 containers: []
	W1213 19:10:56.513467   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:10:56.513473   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:10:56.513531   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:10:56.542772   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:56.542812   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:56.542818   92925 cri.go:89] found id: ""
	I1213 19:10:56.542845   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:10:56.542930   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:56.546773   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:56.550355   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:10:56.550430   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:10:56.577663   92925 cri.go:89] found id: ""
	I1213 19:10:56.577687   92925 logs.go:282] 0 containers: []
	W1213 19:10:56.577695   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:10:56.577701   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:10:56.577811   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:10:56.604755   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:56.604827   92925 cri.go:89] found id: ""
	I1213 19:10:56.604849   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:10:56.604945   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:56.608549   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:10:56.608618   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:10:56.635735   92925 cri.go:89] found id: ""
	I1213 19:10:56.635759   92925 logs.go:282] 0 containers: []
	W1213 19:10:56.635767   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:10:56.635777   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:10:56.635789   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:10:56.729353   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:10:56.729388   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:10:56.741845   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:10:56.741874   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:10:56.815151   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:10:56.806729    2332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:56.807450    2332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:56.808916    2332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:56.809436    2332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:56.811611    2332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:10:56.806729    2332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:56.807450    2332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:56.808916    2332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:56.809436    2332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:56.811611    2332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:10:56.815178   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:10:56.815193   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:56.871711   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:10:56.871748   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:56.904003   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:10:56.904034   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:10:56.941519   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:10:56.941549   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:56.974994   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:10:56.975022   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:57.015259   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:10:57.015290   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:57.059492   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:10:57.059527   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:57.085661   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:10:57.085690   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:10:59.675412   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:59.686117   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:10:59.686192   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:10:59.710921   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:59.710951   92925 cri.go:89] found id: ""
	I1213 19:10:59.710960   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:10:59.711015   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:59.714894   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:10:59.715008   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:10:59.742170   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:59.742193   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:59.742199   92925 cri.go:89] found id: ""
	I1213 19:10:59.742206   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:10:59.742261   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:59.746138   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:59.750866   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:10:59.750942   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:10:59.777917   92925 cri.go:89] found id: ""
	I1213 19:10:59.777943   92925 logs.go:282] 0 containers: []
	W1213 19:10:59.777951   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:10:59.777957   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:10:59.778015   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:10:59.803883   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:59.803903   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:59.803908   92925 cri.go:89] found id: ""
	I1213 19:10:59.803916   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:10:59.803971   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:59.807903   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:59.811388   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:10:59.811453   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:10:59.837952   92925 cri.go:89] found id: ""
	I1213 19:10:59.837977   92925 logs.go:282] 0 containers: []
	W1213 19:10:59.837986   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:10:59.837992   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:10:59.838048   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:10:59.864431   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:59.864490   92925 cri.go:89] found id: ""
	I1213 19:10:59.864512   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:10:59.864594   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:59.869272   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:10:59.869345   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:10:59.896571   92925 cri.go:89] found id: ""
	I1213 19:10:59.896603   92925 logs.go:282] 0 containers: []
	W1213 19:10:59.896612   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:10:59.896622   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:10:59.896634   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:10:59.997222   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:10:59.997313   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:00.122051   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:00.122166   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:00.334228   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:00.323858    2472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:00.324625    2472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:00.326029    2472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:00.326896    2472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:00.328835    2472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:00.323858    2472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:00.324625    2472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:00.326029    2472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:00.326896    2472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:00.328835    2472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:00.334270   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:00.334284   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:00.397345   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:00.397381   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:00.460082   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:00.460118   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:00.507030   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:00.507068   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:00.561579   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:00.561611   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:00.590319   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:11:00.590346   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:00.618590   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:00.618617   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:00.700620   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:00.700655   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:03.247538   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:03.260650   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:03.260720   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:03.296710   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:03.296736   92925 cri.go:89] found id: ""
	I1213 19:11:03.296744   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:03.296804   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:03.300974   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:03.301083   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:03.332989   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:03.333019   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:03.333024   92925 cri.go:89] found id: ""
	I1213 19:11:03.333031   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:03.333085   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:03.337959   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:03.341569   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:03.341642   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:03.367805   92925 cri.go:89] found id: ""
	I1213 19:11:03.367831   92925 logs.go:282] 0 containers: []
	W1213 19:11:03.367840   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:03.367847   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:03.367910   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:03.396144   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:03.396165   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:03.396170   92925 cri.go:89] found id: ""
	I1213 19:11:03.396177   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:03.396234   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:03.400643   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:03.404350   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:03.404422   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:03.431472   92925 cri.go:89] found id: ""
	I1213 19:11:03.431498   92925 logs.go:282] 0 containers: []
	W1213 19:11:03.431508   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:03.431520   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:03.431602   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:03.459968   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:03.460034   92925 cri.go:89] found id: ""
	I1213 19:11:03.460058   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:11:03.460134   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:03.464138   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:03.464230   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:03.491871   92925 cri.go:89] found id: ""
	I1213 19:11:03.491897   92925 logs.go:282] 0 containers: []
	W1213 19:11:03.491906   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:03.491916   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:03.491928   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:03.528376   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:03.528451   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:03.562095   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:03.562124   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:03.575381   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:03.575410   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:03.602586   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:03.602615   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:03.651880   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:03.651912   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:03.708104   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:11:03.708142   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:03.736240   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:03.736268   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:03.814277   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:03.814314   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:03.920505   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:03.920542   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:04.025281   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:04.014467    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:04.015603    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:04.016913    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:04.017960    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:04.019083    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:04.014467    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:04.015603    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:04.016913    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:04.017960    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:04.019083    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:04.025308   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:04.025326   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:06.584492   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:06.595822   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:06.595900   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:06.627891   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:06.627917   92925 cri.go:89] found id: ""
	I1213 19:11:06.627925   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:06.627982   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:06.632107   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:06.632184   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:06.657896   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:06.657921   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:06.657926   92925 cri.go:89] found id: ""
	I1213 19:11:06.657934   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:06.657989   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:06.661493   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:06.665545   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:06.665611   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:06.696673   92925 cri.go:89] found id: ""
	I1213 19:11:06.696748   92925 logs.go:282] 0 containers: []
	W1213 19:11:06.696773   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:06.696792   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:06.696879   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:06.724330   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:06.724355   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:06.724360   92925 cri.go:89] found id: ""
	I1213 19:11:06.724368   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:06.724422   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:06.728040   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:06.731506   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:06.731610   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:06.756515   92925 cri.go:89] found id: ""
	I1213 19:11:06.756578   92925 logs.go:282] 0 containers: []
	W1213 19:11:06.756601   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:06.756622   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:06.756700   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:06.783035   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:06.783094   92925 cri.go:89] found id: ""
	I1213 19:11:06.783117   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:11:06.783184   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:06.787082   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:06.787158   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:06.813991   92925 cri.go:89] found id: ""
	I1213 19:11:06.814014   92925 logs.go:282] 0 containers: []
	W1213 19:11:06.814022   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:06.814031   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:06.814043   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:06.860023   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:06.860057   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:06.915266   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:06.915303   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:07.005436   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:07.005480   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:07.041558   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:07.041591   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:07.055111   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:07.055140   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:07.085506   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:07.085534   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:07.140042   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:07.140080   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:07.170267   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:11:07.170300   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:07.197645   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:07.197676   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:07.298125   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:07.298167   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:07.368495   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:07.358879    2811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:07.359581    2811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:07.361161    2811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:07.361458    2811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:07.363677    2811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:07.358879    2811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:07.359581    2811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:07.361161    2811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:07.361458    2811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:07.363677    2811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:09.868760   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:09.879760   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:09.879831   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:09.907241   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:09.907264   92925 cri.go:89] found id: ""
	I1213 19:11:09.907272   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:09.907331   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:09.910883   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:09.910954   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:09.936137   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:09.936156   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:09.936161   92925 cri.go:89] found id: ""
	I1213 19:11:09.936167   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:09.936222   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:09.940048   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:09.951154   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:09.951222   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:09.985435   92925 cri.go:89] found id: ""
	I1213 19:11:09.985520   92925 logs.go:282] 0 containers: []
	W1213 19:11:09.985532   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:09.985540   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:09.985648   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:10.028412   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:10.028487   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:10.028521   92925 cri.go:89] found id: ""
	I1213 19:11:10.028549   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:10.028643   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:10.035436   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:10.040716   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:10.040834   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:10.070216   92925 cri.go:89] found id: ""
	I1213 19:11:10.070245   92925 logs.go:282] 0 containers: []
	W1213 19:11:10.070255   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:10.070261   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:10.070323   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:10.107151   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:10.107174   92925 cri.go:89] found id: ""
	I1213 19:11:10.107183   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:11:10.107241   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:10.111700   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:10.111773   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:10.148889   92925 cri.go:89] found id: ""
	I1213 19:11:10.148913   92925 logs.go:282] 0 containers: []
	W1213 19:11:10.148922   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:10.148931   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:10.148946   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:10.183850   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:10.183953   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:10.284535   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:10.284572   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:10.361456   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:10.353378    2893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:10.354229    2893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:10.355719    2893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:10.356209    2893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:10.357653    2893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:10.353378    2893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:10.354229    2893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:10.355719    2893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:10.356209    2893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:10.357653    2893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:10.361521   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:10.361543   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:10.401195   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:10.401230   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:10.466771   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:11:10.466806   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:10.492988   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:10.493041   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:10.506114   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:10.506143   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:10.534614   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:10.534643   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:10.589313   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:10.589346   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:10.621617   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:10.621646   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:13.202940   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:13.214007   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:13.214076   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:13.241311   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:13.241334   92925 cri.go:89] found id: ""
	I1213 19:11:13.241342   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:13.241399   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:13.244857   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:13.244973   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:13.271246   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:13.271272   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:13.271277   92925 cri.go:89] found id: ""
	I1213 19:11:13.271284   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:13.271368   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:13.275204   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:13.278868   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:13.278941   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:13.306334   92925 cri.go:89] found id: ""
	I1213 19:11:13.306365   92925 logs.go:282] 0 containers: []
	W1213 19:11:13.306373   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:13.306379   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:13.306440   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:13.332388   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:13.332407   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:13.332412   92925 cri.go:89] found id: ""
	I1213 19:11:13.332419   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:13.332474   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:13.336618   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:13.340235   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:13.340305   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:13.366487   92925 cri.go:89] found id: ""
	I1213 19:11:13.366522   92925 logs.go:282] 0 containers: []
	W1213 19:11:13.366531   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:13.366537   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:13.366597   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:13.397475   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:13.397496   92925 cri.go:89] found id: ""
	I1213 19:11:13.397504   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:11:13.397565   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:13.401266   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:13.401377   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:13.430168   92925 cri.go:89] found id: ""
	I1213 19:11:13.430196   92925 logs.go:282] 0 containers: []
	W1213 19:11:13.430205   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:13.430221   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:13.430235   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:13.496086   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:13.486609    3016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:13.487472    3016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:13.489304    3016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:13.489961    3016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:13.491916    3016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:13.486609    3016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:13.487472    3016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:13.489304    3016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:13.489961    3016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:13.491916    3016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:13.496111   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:13.496124   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:13.548378   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:13.548413   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:13.601861   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:13.601899   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:13.634165   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:11:13.634193   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:13.662242   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:13.662270   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:13.737810   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:13.737846   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:13.770540   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:13.770574   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:13.783830   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:13.783907   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:13.810122   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:13.810149   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:13.856452   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:13.856485   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:16.448594   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:16.459829   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:16.459900   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:16.489717   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:16.489737   92925 cri.go:89] found id: ""
	I1213 19:11:16.489745   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:16.489799   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:16.494205   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:16.494290   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:16.529314   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:16.529336   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:16.529340   92925 cri.go:89] found id: ""
	I1213 19:11:16.529349   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:16.529404   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:16.533136   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:16.536814   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:16.536887   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:16.563026   92925 cri.go:89] found id: ""
	I1213 19:11:16.563064   92925 logs.go:282] 0 containers: []
	W1213 19:11:16.563073   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:16.563079   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:16.563139   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:16.594519   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:16.594541   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:16.594546   92925 cri.go:89] found id: ""
	I1213 19:11:16.594554   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:16.594611   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:16.598288   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:16.601875   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:16.601946   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:16.628577   92925 cri.go:89] found id: ""
	I1213 19:11:16.628603   92925 logs.go:282] 0 containers: []
	W1213 19:11:16.628612   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:16.628618   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:16.628676   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:16.656978   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:16.657001   92925 cri.go:89] found id: ""
	I1213 19:11:16.657039   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:11:16.657095   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:16.661124   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:16.661236   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:16.695697   92925 cri.go:89] found id: ""
	I1213 19:11:16.695731   92925 logs.go:282] 0 containers: []
	W1213 19:11:16.695739   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:16.695748   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:16.695760   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:16.766672   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:16.757776    3152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:16.758599    3152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:16.760229    3152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:16.760563    3152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:16.762386    3152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:16.757776    3152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:16.758599    3152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:16.760229    3152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:16.760563    3152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:16.762386    3152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:16.766696   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:16.766709   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:16.808187   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:16.808237   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:16.850027   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:16.850062   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:16.906135   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:16.906174   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:16.935630   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:11:16.935661   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:16.963433   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:16.963463   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:17.045818   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:17.045852   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:17.079053   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:17.079080   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:17.186217   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:17.186251   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:17.198725   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:17.198760   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:19.727394   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:19.738364   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:19.738431   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:19.768160   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:19.768183   92925 cri.go:89] found id: ""
	I1213 19:11:19.768196   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:19.768252   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:19.772004   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:19.772128   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:19.799342   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:19.799368   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:19.799374   92925 cri.go:89] found id: ""
	I1213 19:11:19.799382   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:19.799466   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:19.803455   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:19.807247   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:19.807340   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:19.835979   92925 cri.go:89] found id: ""
	I1213 19:11:19.836005   92925 logs.go:282] 0 containers: []
	W1213 19:11:19.836014   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:19.836021   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:19.836081   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:19.864302   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:19.864325   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:19.864331   92925 cri.go:89] found id: ""
	I1213 19:11:19.864338   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:19.864397   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:19.868104   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:19.871725   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:19.871812   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:19.899890   92925 cri.go:89] found id: ""
	I1213 19:11:19.899919   92925 logs.go:282] 0 containers: []
	W1213 19:11:19.899937   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:19.899944   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:19.900012   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:19.927600   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:19.927624   92925 cri.go:89] found id: ""
	I1213 19:11:19.927632   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:11:19.927685   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:19.931424   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:19.931509   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:19.961424   92925 cri.go:89] found id: ""
	I1213 19:11:19.961454   92925 logs.go:282] 0 containers: []
	W1213 19:11:19.961469   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:19.961479   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:19.961492   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:20.002155   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:20.002284   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:20.082123   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:20.071968    3295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:20.072791    3295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:20.075159    3295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:20.076013    3295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:20.077851    3295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:20.071968    3295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:20.072791    3295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:20.075159    3295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:20.076013    3295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:20.077851    3295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:20.082148   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:20.082162   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:20.127578   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:20.127614   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:20.174673   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:11:20.174713   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:20.204713   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:20.204791   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:20.282989   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:20.283026   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:20.327361   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:20.327436   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:20.427993   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:20.428032   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:20.442295   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:20.442326   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:20.471477   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:20.471510   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:23.025659   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:23.036724   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:23.036796   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:23.064245   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:23.064269   92925 cri.go:89] found id: ""
	I1213 19:11:23.064281   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:23.064341   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:23.068194   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:23.068269   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:23.097592   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:23.097616   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:23.097622   92925 cri.go:89] found id: ""
	I1213 19:11:23.097629   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:23.097692   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:23.104525   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:23.110378   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:23.110459   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:23.144932   92925 cri.go:89] found id: ""
	I1213 19:11:23.144958   92925 logs.go:282] 0 containers: []
	W1213 19:11:23.144966   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:23.144972   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:23.145063   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:23.177104   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:23.177129   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:23.177134   92925 cri.go:89] found id: ""
	I1213 19:11:23.177142   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:23.177197   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:23.181178   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:23.185904   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:23.185988   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:23.213662   92925 cri.go:89] found id: ""
	I1213 19:11:23.213740   92925 logs.go:282] 0 containers: []
	W1213 19:11:23.213765   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:23.213784   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:23.213891   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:23.244233   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:23.244298   92925 cri.go:89] found id: ""
	I1213 19:11:23.244322   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:11:23.244413   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:23.248148   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:23.248228   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:23.276740   92925 cri.go:89] found id: ""
	I1213 19:11:23.276765   92925 logs.go:282] 0 containers: []
	W1213 19:11:23.276773   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:23.276784   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:23.276796   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:23.336420   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:11:23.336453   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:23.368543   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:23.368572   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:23.450730   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:23.450772   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:23.483510   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:23.483550   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:23.628675   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:23.619033    3457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:23.620672    3457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:23.621438    3457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:23.623126    3457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:23.623775    3457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:23.619033    3457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:23.620672    3457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:23.621438    3457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:23.623126    3457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:23.623775    3457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:23.628699   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:23.628713   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:23.665846   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:23.665882   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:23.713922   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:23.713959   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:23.752354   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:23.752384   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:23.858109   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:23.858150   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:23.871373   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:23.871404   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:26.419535   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:26.430634   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:26.430705   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:26.458628   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:26.458650   92925 cri.go:89] found id: ""
	I1213 19:11:26.458661   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:26.458716   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:26.462422   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:26.462495   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:26.490349   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:26.490389   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:26.490394   92925 cri.go:89] found id: ""
	I1213 19:11:26.490401   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:26.490468   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:26.494405   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:26.498636   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:26.498716   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:26.528607   92925 cri.go:89] found id: ""
	I1213 19:11:26.528637   92925 logs.go:282] 0 containers: []
	W1213 19:11:26.528646   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:26.528653   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:26.528722   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:26.558710   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:26.558733   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:26.558741   92925 cri.go:89] found id: ""
	I1213 19:11:26.558748   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:26.558825   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:26.562803   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:26.566707   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:26.566808   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:26.596729   92925 cri.go:89] found id: ""
	I1213 19:11:26.596754   92925 logs.go:282] 0 containers: []
	W1213 19:11:26.596763   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:26.596769   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:26.596826   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:26.624054   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:26.624077   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:26.624083   92925 cri.go:89] found id: ""
	I1213 19:11:26.624090   92925 logs.go:282] 2 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:11:26.624167   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:26.628449   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:26.632716   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:26.632822   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:26.659170   92925 cri.go:89] found id: ""
	I1213 19:11:26.659195   92925 logs.go:282] 0 containers: []
	W1213 19:11:26.659204   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:26.659213   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:26.659226   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:26.694272   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:11:26.694300   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:26.720924   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:11:26.720959   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:26.751980   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:26.752009   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:26.824509   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:26.824547   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:26.855705   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:26.855733   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:26.867403   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:26.867431   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:26.906787   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:26.906823   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:26.951319   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:26.951351   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:27.006541   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:27.006579   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:27.033554   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:27.033583   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:27.135230   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:27.135266   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:27.210106   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:27.201700    3649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:27.202413    3649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:27.203893    3649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:27.204311    3649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:27.205969    3649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:27.201700    3649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:27.202413    3649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:27.203893    3649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:27.204311    3649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:27.205969    3649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:29.711829   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:29.723531   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:29.723601   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:29.753961   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:29.753984   92925 cri.go:89] found id: ""
	I1213 19:11:29.753992   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:29.754050   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:29.757806   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:29.757873   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:29.783149   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:29.783181   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:29.783186   92925 cri.go:89] found id: ""
	I1213 19:11:29.783194   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:29.783263   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:29.787082   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:29.790979   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:29.791109   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:29.817959   92925 cri.go:89] found id: ""
	I1213 19:11:29.817985   92925 logs.go:282] 0 containers: []
	W1213 19:11:29.817994   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:29.818000   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:29.818060   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:29.846235   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:29.846257   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:29.846262   92925 cri.go:89] found id: ""
	I1213 19:11:29.846270   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:29.846351   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:29.849953   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:29.853572   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:29.853692   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:29.879800   92925 cri.go:89] found id: ""
	I1213 19:11:29.879834   92925 logs.go:282] 0 containers: []
	W1213 19:11:29.879843   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:29.879850   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:29.879915   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:29.907082   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:29.907116   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:29.907121   92925 cri.go:89] found id: ""
	I1213 19:11:29.907128   92925 logs.go:282] 2 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:11:29.907192   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:29.910914   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:29.914566   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:29.914651   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:29.939124   92925 cri.go:89] found id: ""
	I1213 19:11:29.939149   92925 logs.go:282] 0 containers: []
	W1213 19:11:29.939158   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:29.939168   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:29.939205   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:29.981605   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:29.981639   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:30.089079   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:30.089116   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:30.156090   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:30.156124   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:30.186549   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:11:30.186580   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:30.214921   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:11:30.214950   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:30.242668   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:30.242697   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:30.319413   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:30.319445   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:30.419178   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:30.419215   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:30.431724   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:30.431753   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:30.501053   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:30.492849    3776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:30.493577    3776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:30.495362    3776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:30.495976    3776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:30.497562    3776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:30.492849    3776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:30.493577    3776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:30.495362    3776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:30.495976    3776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:30.497562    3776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:30.501078   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:30.501092   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:30.532550   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:30.532577   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:33.076374   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:33.087831   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:33.087899   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:33.126218   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:33.126241   92925 cri.go:89] found id: ""
	I1213 19:11:33.126251   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:33.126315   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:33.130647   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:33.130731   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:33.158982   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:33.159013   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:33.159020   92925 cri.go:89] found id: ""
	I1213 19:11:33.159028   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:33.159094   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:33.162984   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:33.166562   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:33.166635   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:33.193330   92925 cri.go:89] found id: ""
	I1213 19:11:33.193353   92925 logs.go:282] 0 containers: []
	W1213 19:11:33.193361   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:33.193367   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:33.193423   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:33.221129   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:33.221153   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:33.221159   92925 cri.go:89] found id: ""
	I1213 19:11:33.221166   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:33.221239   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:33.225797   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:33.229503   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:33.229615   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:33.257761   92925 cri.go:89] found id: ""
	I1213 19:11:33.257786   92925 logs.go:282] 0 containers: []
	W1213 19:11:33.257795   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:33.257802   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:33.257865   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:33.285915   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:33.285941   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:33.285957   92925 cri.go:89] found id: ""
	I1213 19:11:33.285968   92925 logs.go:282] 2 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:11:33.286026   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:33.289819   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:33.293581   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:33.293655   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:33.324324   92925 cri.go:89] found id: ""
	I1213 19:11:33.324348   92925 logs.go:282] 0 containers: []
	W1213 19:11:33.324357   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:33.324366   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:33.324377   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:33.350842   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:33.350913   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:33.424344   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:33.424380   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:33.452897   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:33.452930   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:33.504468   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:33.504506   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:33.579150   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:11:33.579183   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:33.607049   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:11:33.607076   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:33.633297   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:33.633326   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:33.668670   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:33.668699   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:33.766904   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:33.766936   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:33.780538   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:33.780567   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:33.857253   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:33.848822    3937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:33.849778    3937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:33.851312    3937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:33.851759    3937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:33.853392    3937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:33.848822    3937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:33.849778    3937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:33.851312    3937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:33.851759    3937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:33.853392    3937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:33.857275   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:33.857290   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:36.398970   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:36.410341   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:36.410416   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:36.438456   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:36.438479   92925 cri.go:89] found id: ""
	I1213 19:11:36.438488   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:36.438568   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:36.442320   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:36.442395   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:36.470092   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:36.470116   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:36.470121   92925 cri.go:89] found id: ""
	I1213 19:11:36.470131   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:36.470218   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:36.474021   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:36.477467   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:36.477578   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:36.505647   92925 cri.go:89] found id: ""
	I1213 19:11:36.505670   92925 logs.go:282] 0 containers: []
	W1213 19:11:36.505714   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:36.505733   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:36.505804   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:36.537872   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:36.537895   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:36.537900   92925 cri.go:89] found id: ""
	I1213 19:11:36.537907   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:36.537961   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:36.541660   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:36.545244   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:36.545314   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:36.570195   92925 cri.go:89] found id: ""
	I1213 19:11:36.570228   92925 logs.go:282] 0 containers: []
	W1213 19:11:36.570238   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:36.570250   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:36.570339   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:36.595894   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:36.595958   92925 cri.go:89] found id: ""
	I1213 19:11:36.595979   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:11:36.596064   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:36.599675   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:36.599789   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:36.624988   92925 cri.go:89] found id: ""
	I1213 19:11:36.625083   92925 logs.go:282] 0 containers: []
	W1213 19:11:36.625101   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:36.625112   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:36.625123   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:36.718891   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:36.718924   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:36.786494   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:36.778476    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:36.779141    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:36.780744    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:36.781242    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:36.782695    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:36.778476    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:36.779141    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:36.780744    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:36.781242    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:36.782695    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:36.786519   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:36.786531   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:36.828295   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:36.828328   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:36.871560   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:36.871591   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:36.941295   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:36.941335   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:37.023869   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:37.023902   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:37.055672   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:37.055700   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:37.069301   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:37.069334   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:37.098989   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:37.099015   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:37.135738   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:11:37.135771   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:39.664114   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:39.675928   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:39.675999   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:39.702971   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:39.702989   92925 cri.go:89] found id: ""
	I1213 19:11:39.702998   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:39.703053   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:39.707021   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:39.707096   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:39.733615   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:39.733637   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:39.733642   92925 cri.go:89] found id: ""
	I1213 19:11:39.733663   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:39.733720   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:39.737520   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:39.740992   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:39.741107   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:39.769090   92925 cri.go:89] found id: ""
	I1213 19:11:39.769174   92925 logs.go:282] 0 containers: []
	W1213 19:11:39.769194   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:39.769201   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:39.769351   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:39.804293   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:39.804314   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:39.804319   92925 cri.go:89] found id: ""
	I1213 19:11:39.804326   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:39.804389   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:39.808495   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:39.812181   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:39.812255   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:39.838217   92925 cri.go:89] found id: ""
	I1213 19:11:39.838243   92925 logs.go:282] 0 containers: []
	W1213 19:11:39.838252   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:39.838259   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:39.838314   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:39.866484   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:39.866504   92925 cri.go:89] found id: ""
	I1213 19:11:39.866512   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:11:39.866567   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:39.870814   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:39.870885   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:39.908207   92925 cri.go:89] found id: ""
	I1213 19:11:39.908233   92925 logs.go:282] 0 containers: []
	W1213 19:11:39.908243   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:39.908252   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:39.908264   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:39.920472   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:39.920499   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:39.948910   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:39.948951   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:40.012782   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:11:40.012825   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:40.047267   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:40.047297   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:40.129790   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:40.129871   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:40.168487   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:40.168519   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:40.269381   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:40.269456   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:40.338885   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:40.330165    4198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:40.330955    4198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:40.333137    4198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:40.333832    4198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:40.335154    4198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:40.330165    4198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:40.330955    4198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:40.333137    4198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:40.333832    4198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:40.335154    4198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:40.338906   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:40.338919   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:40.394986   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:40.395024   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:40.460751   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:40.460799   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:42.992519   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:43.004031   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:43.004110   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:43.032556   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:43.032578   92925 cri.go:89] found id: ""
	I1213 19:11:43.032586   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:43.032640   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:43.036332   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:43.036401   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:43.065252   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:43.065282   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:43.065288   92925 cri.go:89] found id: ""
	I1213 19:11:43.065296   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:43.065358   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:43.070007   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:43.074047   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:43.074122   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:43.108141   92925 cri.go:89] found id: ""
	I1213 19:11:43.108169   92925 logs.go:282] 0 containers: []
	W1213 19:11:43.108181   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:43.108188   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:43.108248   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:43.139539   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:43.139560   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:43.139566   92925 cri.go:89] found id: ""
	I1213 19:11:43.139574   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:43.139629   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:43.143534   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:43.147218   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:43.147292   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:43.175751   92925 cri.go:89] found id: ""
	I1213 19:11:43.175825   92925 logs.go:282] 0 containers: []
	W1213 19:11:43.175849   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:43.175868   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:43.175952   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:43.200994   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:43.201062   92925 cri.go:89] found id: ""
	I1213 19:11:43.201072   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:11:43.201127   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:43.204988   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:43.205128   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:43.231895   92925 cri.go:89] found id: ""
	I1213 19:11:43.231922   92925 logs.go:282] 0 containers: []
	W1213 19:11:43.231946   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:43.231955   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:43.231968   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:43.272192   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:43.272228   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:43.334615   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:11:43.334650   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:43.366125   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:43.366153   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:43.397225   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:43.397254   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:43.468828   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:43.460439    4331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:43.461076    4331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:43.462731    4331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:43.463290    4331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:43.464964    4331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:43.460439    4331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:43.461076    4331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:43.462731    4331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:43.463290    4331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:43.464964    4331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:43.468856   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:43.468869   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:43.519337   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:43.519376   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:43.552934   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:43.552963   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:43.636492   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:43.636526   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:43.735496   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:43.735529   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:43.748666   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:43.748693   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:46.276009   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:46.287459   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:46.287539   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:46.315787   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:46.315809   92925 cri.go:89] found id: ""
	I1213 19:11:46.315817   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:46.315881   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:46.319776   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:46.319870   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:46.349638   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:46.349701   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:46.349721   92925 cri.go:89] found id: ""
	I1213 19:11:46.349737   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:46.349810   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:46.353770   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:46.357319   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:46.357391   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:46.387852   92925 cri.go:89] found id: ""
	I1213 19:11:46.387879   92925 logs.go:282] 0 containers: []
	W1213 19:11:46.387888   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:46.387895   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:46.387956   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:46.415327   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:46.415351   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:46.415362   92925 cri.go:89] found id: ""
	I1213 19:11:46.415369   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:46.415425   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:46.420351   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:46.423877   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:46.423945   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:46.452445   92925 cri.go:89] found id: ""
	I1213 19:11:46.452471   92925 logs.go:282] 0 containers: []
	W1213 19:11:46.452480   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:46.452487   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:46.452543   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:46.488306   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:46.488329   92925 cri.go:89] found id: ""
	I1213 19:11:46.488337   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:11:46.488393   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:46.492372   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:46.492477   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:46.531601   92925 cri.go:89] found id: ""
	I1213 19:11:46.531625   92925 logs.go:282] 0 containers: []
	W1213 19:11:46.531635   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:46.531644   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:46.531656   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:46.576619   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:46.576653   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:46.637968   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:46.638005   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:46.666074   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:46.666103   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:46.699911   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:46.699988   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:46.741837   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:11:46.741889   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:46.771703   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:46.771729   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:46.848202   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:46.848240   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:46.949628   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:46.949664   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:46.963040   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:46.963071   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:47.045784   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:47.037108    4491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:47.038507    4491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:47.039621    4491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:47.040561    4491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:47.042097    4491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:47.037108    4491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:47.038507    4491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:47.039621    4491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:47.040561    4491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:47.042097    4491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:47.045805   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:47.045818   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:49.573745   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:49.584944   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:49.585049   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:49.612421   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:49.612440   92925 cri.go:89] found id: ""
	I1213 19:11:49.612448   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:49.612503   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:49.616771   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:49.616842   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:49.644250   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:49.644313   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:49.644342   92925 cri.go:89] found id: ""
	I1213 19:11:49.644365   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:49.644448   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:49.648357   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:49.652087   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:49.652211   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:49.678765   92925 cri.go:89] found id: ""
	I1213 19:11:49.678790   92925 logs.go:282] 0 containers: []
	W1213 19:11:49.678798   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:49.678804   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:49.678882   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:49.707013   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:49.707082   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:49.707102   92925 cri.go:89] found id: ""
	I1213 19:11:49.707128   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:49.707219   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:49.711513   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:49.715226   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:49.715321   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:49.741306   92925 cri.go:89] found id: ""
	I1213 19:11:49.741375   92925 logs.go:282] 0 containers: []
	W1213 19:11:49.741401   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:49.741421   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:49.741505   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:49.768427   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:49.768451   92925 cri.go:89] found id: ""
	I1213 19:11:49.768459   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:11:49.768517   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:49.772356   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:49.772478   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:49.801564   92925 cri.go:89] found id: ""
	I1213 19:11:49.801633   92925 logs.go:282] 0 containers: []
	W1213 19:11:49.801659   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:49.801687   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:49.801725   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:49.827233   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:49.827261   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:49.884809   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:49.884846   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:49.911980   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:11:49.912011   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:49.938143   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:49.938174   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:49.951851   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:49.951880   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:49.992816   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:49.992861   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:50.064112   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:50.064149   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:50.149808   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:50.149847   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:50.182876   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:50.182907   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:50.285831   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:50.285868   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:50.357682   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:50.350098    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:50.350586    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:50.351793    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:50.352420    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:50.354169    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:50.350098    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:50.350586    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:50.351793    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:50.352420    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:50.354169    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:52.858319   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:52.869473   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:52.869548   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:52.897144   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:52.897169   92925 cri.go:89] found id: ""
	I1213 19:11:52.897177   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:52.897234   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:52.900973   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:52.901074   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:52.928815   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:52.928842   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:52.928847   92925 cri.go:89] found id: ""
	I1213 19:11:52.928855   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:52.928912   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:52.932785   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:52.936853   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:52.936928   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:52.963913   92925 cri.go:89] found id: ""
	I1213 19:11:52.963940   92925 logs.go:282] 0 containers: []
	W1213 19:11:52.963949   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:52.963954   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:52.964018   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:52.993621   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:52.993685   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:52.993705   92925 cri.go:89] found id: ""
	I1213 19:11:52.993730   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:52.993820   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:52.997612   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:53.001214   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:53.001293   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:53.032707   92925 cri.go:89] found id: ""
	I1213 19:11:53.032733   92925 logs.go:282] 0 containers: []
	W1213 19:11:53.032742   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:53.032749   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:53.032812   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:53.059757   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:53.059780   92925 cri.go:89] found id: ""
	I1213 19:11:53.059805   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:11:53.059860   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:53.063600   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:53.063673   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:53.091179   92925 cri.go:89] found id: ""
	I1213 19:11:53.091248   92925 logs.go:282] 0 containers: []
	W1213 19:11:53.091286   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:53.091303   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:11:53.091316   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:53.123301   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:53.123391   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:53.196598   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:53.196634   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:53.227689   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:53.227715   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:53.327870   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:53.327905   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:53.343261   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:53.343290   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:53.371058   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:53.371089   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:53.418862   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:53.418896   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:53.475787   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:53.475822   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:53.507061   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:53.507090   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:53.584040   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:53.575651    4763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:53.576367    4763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:53.577874    4763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:53.578518    4763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:53.580190    4763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:53.575651    4763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:53.576367    4763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:53.577874    4763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:53.578518    4763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:53.580190    4763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:53.584063   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:53.584076   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:56.124239   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:56.136746   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:56.136818   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:56.165417   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:56.165442   92925 cri.go:89] found id: ""
	I1213 19:11:56.165451   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:56.165513   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:56.169272   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:56.169348   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:56.198281   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:56.198304   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:56.198309   92925 cri.go:89] found id: ""
	I1213 19:11:56.198316   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:56.198370   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:56.202310   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:56.206597   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:56.206670   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:56.233152   92925 cri.go:89] found id: ""
	I1213 19:11:56.233179   92925 logs.go:282] 0 containers: []
	W1213 19:11:56.233189   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:56.233195   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:56.233259   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:56.263980   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:56.264000   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:56.264005   92925 cri.go:89] found id: ""
	I1213 19:11:56.264013   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:56.264071   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:56.268409   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:56.272169   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:56.272245   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:56.307136   92925 cri.go:89] found id: ""
	I1213 19:11:56.307163   92925 logs.go:282] 0 containers: []
	W1213 19:11:56.307173   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:56.307179   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:56.307237   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:56.335595   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:56.335618   92925 cri.go:89] found id: ""
	I1213 19:11:56.335626   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:11:56.335684   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:56.339317   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:56.339388   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:56.365740   92925 cri.go:89] found id: ""
	I1213 19:11:56.365763   92925 logs.go:282] 0 containers: []
	W1213 19:11:56.365773   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:56.365782   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:56.365795   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:56.392684   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:56.392715   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:56.443884   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:56.443916   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:56.470931   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:11:56.471007   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:56.498493   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:56.498569   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:56.594275   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:56.594325   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:56.697865   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:56.697902   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:56.710803   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:56.710833   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:56.774588   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:56.766250    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:56.767127    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:56.768759    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:56.769116    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:56.770766    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:56.766250    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:56.767127    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:56.768759    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:56.769116    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:56.770766    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:56.774608   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:56.774621   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:56.822318   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:56.822354   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:56.879404   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:56.879440   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:59.418085   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:59.429523   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:59.429599   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:59.459140   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:59.459164   92925 cri.go:89] found id: ""
	I1213 19:11:59.459173   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:59.459250   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:59.463131   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:59.463231   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:59.491515   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:59.491539   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:59.491544   92925 cri.go:89] found id: ""
	I1213 19:11:59.491552   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:59.491650   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:59.495555   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:59.499043   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:59.499118   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:59.542670   92925 cri.go:89] found id: ""
	I1213 19:11:59.542745   92925 logs.go:282] 0 containers: []
	W1213 19:11:59.542771   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:59.542785   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:59.542861   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:59.569926   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:59.569950   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:59.569954   92925 cri.go:89] found id: ""
	I1213 19:11:59.569962   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:59.570030   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:59.574242   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:59.578071   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:59.578177   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:59.610686   92925 cri.go:89] found id: ""
	I1213 19:11:59.610714   92925 logs.go:282] 0 containers: []
	W1213 19:11:59.610723   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:59.610729   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:59.610789   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:59.639587   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:59.639641   92925 cri.go:89] found id: ""
	I1213 19:11:59.639659   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:11:59.639720   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:59.644316   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:59.644404   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:59.672619   92925 cri.go:89] found id: ""
	I1213 19:11:59.672644   92925 logs.go:282] 0 containers: []
	W1213 19:11:59.672653   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:59.672663   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:11:59.672684   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:59.700144   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:59.700172   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:59.777808   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:59.777856   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:59.811078   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:59.811111   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:59.910789   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:59.910827   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:59.987053   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:59.975650    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:59.976469    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:59.977682    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:59.978310    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:59.979849    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:59.975650    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:59.976469    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:59.977682    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:59.978310    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:59.979849    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:00.003642   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:00.003687   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:00.194711   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:00.194803   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:00.357297   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:00.357336   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:00.438487   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:00.438580   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:00.454845   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:00.454880   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:00.564592   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:00.564633   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:03.112543   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:03.123663   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:03.123738   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:03.157514   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:03.157538   92925 cri.go:89] found id: ""
	I1213 19:12:03.157546   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:03.157601   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:03.161756   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:03.161829   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:03.187867   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:03.187887   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:03.187892   92925 cri.go:89] found id: ""
	I1213 19:12:03.187900   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:03.187954   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:03.191586   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:03.195089   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:03.195186   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:03.227702   92925 cri.go:89] found id: ""
	I1213 19:12:03.227727   92925 logs.go:282] 0 containers: []
	W1213 19:12:03.227736   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:03.227742   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:03.227802   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:03.254539   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:03.254561   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:03.254566   92925 cri.go:89] found id: ""
	I1213 19:12:03.254574   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:03.254653   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:03.258434   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:03.262232   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:03.262309   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:03.293528   92925 cri.go:89] found id: ""
	I1213 19:12:03.293552   92925 logs.go:282] 0 containers: []
	W1213 19:12:03.293561   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:03.293567   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:03.293627   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:03.324573   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:03.324595   92925 cri.go:89] found id: ""
	I1213 19:12:03.324603   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:03.324655   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:03.328400   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:03.328469   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:03.354317   92925 cri.go:89] found id: ""
	I1213 19:12:03.354342   92925 logs.go:282] 0 containers: []
	W1213 19:12:03.354351   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:03.354362   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:03.354376   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:03.416520   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:03.416559   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:03.443937   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:03.443966   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:03.520631   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:03.520669   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:03.539545   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:03.539575   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:03.609658   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:03.599495    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:03.600262    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:03.602170    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:03.604093    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:03.604836    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:03.599495    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:03.600262    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:03.602170    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:03.604093    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:03.604836    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:03.609679   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:03.609691   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:03.641994   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:03.642021   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:03.683262   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:03.683296   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:03.711455   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:03.711486   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:03.742963   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:03.742994   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:03.842936   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:03.842971   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:06.387950   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:06.398757   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:06.398838   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:06.427281   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:06.427343   92925 cri.go:89] found id: ""
	I1213 19:12:06.427359   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:06.427424   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:06.431296   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:06.431370   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:06.458047   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:06.458069   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:06.458073   92925 cri.go:89] found id: ""
	I1213 19:12:06.458081   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:06.458138   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:06.461822   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:06.466010   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:06.466084   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:06.504515   92925 cri.go:89] found id: ""
	I1213 19:12:06.504542   92925 logs.go:282] 0 containers: []
	W1213 19:12:06.504551   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:06.504560   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:06.504621   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:06.541478   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:06.541501   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:06.541506   92925 cri.go:89] found id: ""
	I1213 19:12:06.541514   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:06.541576   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:06.545645   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:06.549634   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:06.549704   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:06.576630   92925 cri.go:89] found id: ""
	I1213 19:12:06.576698   92925 logs.go:282] 0 containers: []
	W1213 19:12:06.576724   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:06.576744   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:06.576832   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:06.604207   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:06.604229   92925 cri.go:89] found id: ""
	I1213 19:12:06.604237   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:06.604298   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:06.608117   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:06.608232   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:06.634291   92925 cri.go:89] found id: ""
	I1213 19:12:06.634362   92925 logs.go:282] 0 containers: []
	W1213 19:12:06.634379   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:06.634388   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:06.634402   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:06.696997   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:06.697085   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:06.756705   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:06.756741   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:06.836493   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:06.836525   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:06.936663   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:06.936700   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:06.949180   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:06.949212   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:07.020703   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:07.012352    5275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:07.013247    5275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:07.014825    5275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:07.015260    5275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:07.016747    5275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:07.012352    5275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:07.013247    5275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:07.014825    5275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:07.015260    5275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:07.016747    5275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:07.020728   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:07.020741   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:07.052354   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:07.052383   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:07.079834   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:07.079865   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:07.119690   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:07.119720   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:07.146357   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:07.146385   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:09.686883   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:09.697849   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:09.697924   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:09.724282   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:09.724307   92925 cri.go:89] found id: ""
	I1213 19:12:09.724316   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:09.724374   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:09.727853   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:09.727929   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:09.757294   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:09.757315   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:09.757320   92925 cri.go:89] found id: ""
	I1213 19:12:09.757328   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:09.757383   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:09.761291   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:09.764680   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:09.764755   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:09.791939   92925 cri.go:89] found id: ""
	I1213 19:12:09.791964   92925 logs.go:282] 0 containers: []
	W1213 19:12:09.791974   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:09.791979   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:09.792059   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:09.819349   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:09.819415   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:09.819435   92925 cri.go:89] found id: ""
	I1213 19:12:09.819460   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:09.819540   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:09.823580   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:09.827023   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:09.827138   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:09.857888   92925 cri.go:89] found id: ""
	I1213 19:12:09.857966   92925 logs.go:282] 0 containers: []
	W1213 19:12:09.857990   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:09.858001   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:09.858066   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:09.884350   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:09.884373   92925 cri.go:89] found id: ""
	I1213 19:12:09.884381   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:09.884438   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:09.888641   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:09.888720   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:09.915592   92925 cri.go:89] found id: ""
	I1213 19:12:09.915614   92925 logs.go:282] 0 containers: []
	W1213 19:12:09.915623   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:09.915632   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:09.915644   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:09.941582   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:09.941614   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:10.002342   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:10.002377   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:10.031301   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:10.031336   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:10.071296   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:10.071332   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:10.123567   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:10.123605   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:10.157428   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:10.157457   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:10.238347   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:10.238426   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:10.334563   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:10.334598   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:10.347255   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:10.347286   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:10.432160   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:10.423156    5451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:10.423973    5451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:10.425617    5451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:10.426254    5451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:10.428070    5451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:10.423156    5451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:10.423973    5451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:10.425617    5451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:10.426254    5451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:10.428070    5451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:10.432226   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:10.432252   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:12.994728   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:13.005943   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:13.006017   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:13.033581   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:13.033602   92925 cri.go:89] found id: ""
	I1213 19:12:13.033610   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:13.033689   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:13.037439   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:13.037531   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:13.069482   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:13.069506   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:13.069511   92925 cri.go:89] found id: ""
	I1213 19:12:13.069520   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:13.069579   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:13.073384   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:13.077179   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:13.077250   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:13.117434   92925 cri.go:89] found id: ""
	I1213 19:12:13.117508   92925 logs.go:282] 0 containers: []
	W1213 19:12:13.117525   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:13.117532   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:13.117603   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:13.151113   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:13.151191   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:13.151211   92925 cri.go:89] found id: ""
	I1213 19:12:13.151235   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:13.151330   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:13.155305   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:13.159267   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:13.159375   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:13.193156   92925 cri.go:89] found id: ""
	I1213 19:12:13.193183   92925 logs.go:282] 0 containers: []
	W1213 19:12:13.193191   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:13.193197   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:13.193303   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:13.228192   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:13.228272   92925 cri.go:89] found id: ""
	I1213 19:12:13.228304   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:13.228385   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:13.232149   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:13.232270   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:13.265793   92925 cri.go:89] found id: ""
	I1213 19:12:13.265868   92925 logs.go:282] 0 containers: []
	W1213 19:12:13.265892   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:13.265914   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:13.265974   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:13.298247   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:13.298332   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:13.338944   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:13.338977   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:13.398561   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:13.398600   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:13.426862   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:13.426891   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:13.526771   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:13.526807   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:13.539556   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:13.539587   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:13.606738   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:13.598805    5571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:13.599569    5571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:13.600660    5571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:13.601348    5571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:13.602977    5571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:13.598805    5571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:13.599569    5571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:13.600660    5571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:13.601348    5571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:13.602977    5571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
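
	For reference, each retry cycle above runs the same collection pass on the control-plane node. Pulled together, the commands shown in the log are (a reading aid assembled from the log lines themselves, using the paths the log reports; it is not an excerpt of minikube's source):

	  # enumerate control-plane containers known to CRI-O, in any state
	  sudo crictl ps -a --quiet --name=kube-apiserver
	  sudo crictl ps -a --quiet --name=etcd
	  sudo crictl ps -a --quiet --name=kube-scheduler
	  sudo crictl ps -a --quiet --name=kube-controller-manager
	  # tail the last 400 lines of each container found above (IDs come from the ps output)
	  sudo /usr/local/bin/crictl logs --tail 400 <container-id>
	  # node-level logs collected alongside the container logs
	  sudo journalctl -u kubelet -n 400
	  sudo journalctl -u crio -n 400
	  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	  # the step that keeps failing while nothing answers on localhost:8443
	  sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
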
	I1213 19:12:13.606761   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:13.606777   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:13.632299   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:13.632367   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:13.681186   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:13.681224   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:13.715711   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:13.715741   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:16.289974   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:16.301720   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:16.301794   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:16.333180   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:16.333203   92925 cri.go:89] found id: ""
	I1213 19:12:16.333211   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:16.333262   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:16.337163   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:16.337233   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:16.366808   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:16.366829   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:16.366834   92925 cri.go:89] found id: ""
	I1213 19:12:16.366841   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:16.366897   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:16.370643   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:16.374381   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:16.374453   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:16.402639   92925 cri.go:89] found id: ""
	I1213 19:12:16.402663   92925 logs.go:282] 0 containers: []
	W1213 19:12:16.402672   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:16.402678   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:16.402735   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:16.429862   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:16.429927   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:16.429948   92925 cri.go:89] found id: ""
	I1213 19:12:16.429971   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:16.430057   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:16.437586   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:16.443620   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:16.443739   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:16.468889   92925 cri.go:89] found id: ""
	I1213 19:12:16.468915   92925 logs.go:282] 0 containers: []
	W1213 19:12:16.468933   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:16.468940   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:16.469002   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:16.497884   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:16.497952   92925 cri.go:89] found id: ""
	I1213 19:12:16.497975   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:16.498065   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:16.501907   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:16.502017   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:16.528833   92925 cri.go:89] found id: ""
	I1213 19:12:16.528861   92925 logs.go:282] 0 containers: []
	W1213 19:12:16.528871   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:16.528880   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:16.528891   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:16.571970   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:16.572003   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:16.599399   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:16.599433   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:16.626668   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:16.626698   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:16.657476   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:16.657505   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:16.756171   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:16.756207   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:16.768558   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:16.768587   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:16.841002   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:16.841041   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:16.913877   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:16.913951   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:17.002296   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:16.981549    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:16.983800    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:16.984559    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:16.987461    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:16.988234    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:16.981549    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:16.983800    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:16.984559    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:16.987461    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:16.988234    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:17.002364   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:17.002385   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:17.029940   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:17.029968   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:19.576739   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:19.587975   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:19.588041   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:19.614817   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:19.614840   92925 cri.go:89] found id: ""
	I1213 19:12:19.614848   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:19.614903   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:19.618582   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:19.618679   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:19.651398   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:19.651419   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:19.651424   92925 cri.go:89] found id: ""
	I1213 19:12:19.651432   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:19.651501   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:19.655392   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:19.659059   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:19.659134   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:19.684221   92925 cri.go:89] found id: ""
	I1213 19:12:19.684247   92925 logs.go:282] 0 containers: []
	W1213 19:12:19.684257   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:19.684264   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:19.684323   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:19.711198   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:19.711220   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:19.711226   92925 cri.go:89] found id: ""
	I1213 19:12:19.711233   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:19.711289   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:19.715680   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:19.719221   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:19.719292   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:19.751237   92925 cri.go:89] found id: ""
	I1213 19:12:19.751286   92925 logs.go:282] 0 containers: []
	W1213 19:12:19.751296   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:19.751303   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:19.751371   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:19.778300   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:19.778321   92925 cri.go:89] found id: ""
	I1213 19:12:19.778330   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:19.778413   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:19.782520   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:19.782614   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:19.814477   92925 cri.go:89] found id: ""
	I1213 19:12:19.814507   92925 logs.go:282] 0 containers: []
	W1213 19:12:19.814517   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:19.814526   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:19.814558   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:19.855891   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:19.855922   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:19.917648   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:19.917687   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:19.949548   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:19.949574   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:19.976644   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:19.976680   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:20.064988   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:20.065042   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:20.114742   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:20.114776   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:20.220028   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:20.220066   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:20.232673   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:20.232703   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:20.314099   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:20.305597    5860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:20.306343    5860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:20.308133    5860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:20.308739    5860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:20.310382    5860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:20.305597    5860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:20.306343    5860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:20.308133    5860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:20.308739    5860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:20.310382    5860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:20.314125   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:20.314142   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:20.358618   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:20.358649   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:22.884692   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:22.896642   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:22.896714   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:22.925894   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:22.925919   92925 cri.go:89] found id: ""
	I1213 19:12:22.925928   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:22.925982   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:22.929556   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:22.929630   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:22.957310   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:22.957375   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:22.957393   92925 cri.go:89] found id: ""
	I1213 19:12:22.957419   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:22.957496   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:22.961230   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:22.964927   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:22.965122   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:22.993901   92925 cri.go:89] found id: ""
	I1213 19:12:22.993974   92925 logs.go:282] 0 containers: []
	W1213 19:12:22.994000   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:22.994012   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:22.994092   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:23.021087   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:23.021112   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:23.021117   92925 cri.go:89] found id: ""
	I1213 19:12:23.021123   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:23.021179   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:23.025414   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:23.029044   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:23.029147   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:23.054815   92925 cri.go:89] found id: ""
	I1213 19:12:23.054840   92925 logs.go:282] 0 containers: []
	W1213 19:12:23.054848   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:23.054855   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:23.054913   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:23.080286   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:23.080312   92925 cri.go:89] found id: ""
	I1213 19:12:23.080320   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:23.080407   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:23.084274   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:23.084375   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:23.115727   92925 cri.go:89] found id: ""
	I1213 19:12:23.115750   92925 logs.go:282] 0 containers: []
	W1213 19:12:23.115758   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:23.115767   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:23.115796   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:23.194830   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:23.186405    5944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:23.187281    5944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:23.188756    5944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:23.189379    5944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:23.191250    5944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:23.186405    5944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:23.187281    5944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:23.188756    5944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:23.189379    5944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:23.191250    5944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:23.194890   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:23.194911   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:23.234766   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:23.234801   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:23.282930   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:23.282966   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:23.352028   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:23.352067   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:23.379340   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:23.379418   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:23.425558   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:23.425589   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:23.453170   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:23.453198   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:23.484993   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:23.485089   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:23.575060   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:23.575093   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:23.676623   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:23.676658   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:26.191200   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:26.202087   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:26.202208   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:26.237575   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:26.237607   92925 cri.go:89] found id: ""
	I1213 19:12:26.237616   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:26.237685   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:26.242604   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:26.242726   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:26.275657   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:26.275680   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:26.275687   92925 cri.go:89] found id: ""
	I1213 19:12:26.275696   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:26.275774   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:26.279747   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:26.283677   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:26.283784   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:26.312109   92925 cri.go:89] found id: ""
	I1213 19:12:26.312185   92925 logs.go:282] 0 containers: []
	W1213 19:12:26.312219   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:26.312239   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:26.312329   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:26.342409   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:26.342432   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:26.342437   92925 cri.go:89] found id: ""
	I1213 19:12:26.342445   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:26.342500   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:26.346485   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:26.350281   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:26.350365   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:26.375751   92925 cri.go:89] found id: ""
	I1213 19:12:26.375775   92925 logs.go:282] 0 containers: []
	W1213 19:12:26.375783   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:26.375790   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:26.375864   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:26.401584   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:26.401607   92925 cri.go:89] found id: ""
	I1213 19:12:26.401614   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:26.401686   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:26.405294   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:26.405373   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:26.433390   92925 cri.go:89] found id: ""
	I1213 19:12:26.433467   92925 logs.go:282] 0 containers: []
	W1213 19:12:26.433491   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:26.433507   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:26.433533   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:26.493265   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:26.493305   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:26.528279   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:26.528307   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:26.612530   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:26.612565   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:26.625201   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:26.625231   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:26.695921   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:26.686948    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:26.687827    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:26.689491    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:26.690111    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:26.691852    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:26.686948    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:26.687827    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:26.689491    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:26.690111    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:26.691852    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
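
	Every "describe nodes" attempt in this window fails the same way: kubectl cannot reach the apiserver on localhost:8443, so the pass falls back to container and journal logs and retries a few seconds later. The gate for each retry is the pgrep check that opens every cycle. A minimal manual equivalent, assuming shell access to the node (the ss and curl checks are added here for illustration and do not appear in the log):

	  # wait until an apiserver process for this minikube profile exists again
	  until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	    sleep 3
	  done
	  # then confirm the port actually accepts connections before retrying kubectl
	  sudo ss -ltn 'sport = :8443'
	  curl -ks https://localhost:8443/healthz
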
	I1213 19:12:26.695942   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:26.695955   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:26.721367   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:26.721436   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:26.747790   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:26.747818   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:26.778783   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:26.778813   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:26.875307   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:26.875341   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:26.926065   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:26.926104   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:29.471412   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:29.482208   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:29.482279   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:29.518089   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:29.518111   92925 cri.go:89] found id: ""
	I1213 19:12:29.518120   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:29.518179   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:29.522151   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:29.522316   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:29.550522   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:29.550548   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:29.550553   92925 cri.go:89] found id: ""
	I1213 19:12:29.550561   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:29.550614   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:29.554476   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:29.557855   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:29.557927   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:29.585314   92925 cri.go:89] found id: ""
	I1213 19:12:29.585337   92925 logs.go:282] 0 containers: []
	W1213 19:12:29.585346   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:29.585352   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:29.585415   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:29.613061   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:29.613081   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:29.613087   92925 cri.go:89] found id: ""
	I1213 19:12:29.613094   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:29.613149   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:29.617383   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:29.621127   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:29.621198   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:29.648388   92925 cri.go:89] found id: ""
	I1213 19:12:29.648415   92925 logs.go:282] 0 containers: []
	W1213 19:12:29.648425   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:29.648434   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:29.648493   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:29.675800   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:29.675823   92925 cri.go:89] found id: ""
	I1213 19:12:29.675832   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:29.675885   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:29.679891   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:29.679964   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:29.708415   92925 cri.go:89] found id: ""
	I1213 19:12:29.708439   92925 logs.go:282] 0 containers: []
	W1213 19:12:29.708447   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:29.708457   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:29.708469   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:29.747281   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:29.747357   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:29.791340   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:29.791374   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:29.834406   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:29.834436   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:29.861132   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:29.861162   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:29.962754   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:29.962831   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:29.975698   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:29.975725   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:30.136167   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:30.136206   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:30.219391   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:30.219426   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:30.250060   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:30.250090   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:30.324085   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:30.315913    6276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:30.316779    6276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:30.318083    6276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:30.318787    6276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:30.320486    6276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:30.315913    6276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:30.316779    6276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:30.318083    6276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:30.318787    6276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:30.320486    6276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:30.324108   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:30.324122   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:32.849129   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:32.861076   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:32.861146   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:32.890816   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:32.890837   92925 cri.go:89] found id: ""
	I1213 19:12:32.890845   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:32.890899   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:32.894607   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:32.894684   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:32.925830   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:32.925856   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:32.925861   92925 cri.go:89] found id: ""
	I1213 19:12:32.925868   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:32.925921   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:32.929582   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:32.932913   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:32.932983   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:32.959171   92925 cri.go:89] found id: ""
	I1213 19:12:32.959199   92925 logs.go:282] 0 containers: []
	W1213 19:12:32.959208   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:32.959214   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:32.959319   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:32.993282   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:32.993309   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:32.993315   92925 cri.go:89] found id: ""
	I1213 19:12:32.993331   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:32.993393   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:32.997923   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:33.002009   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:33.002111   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:33.029187   92925 cri.go:89] found id: ""
	I1213 19:12:33.029210   92925 logs.go:282] 0 containers: []
	W1213 19:12:33.029219   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:33.029225   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:33.029333   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:33.057252   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:33.057287   92925 cri.go:89] found id: ""
	I1213 19:12:33.057296   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:33.057360   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:33.061234   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:33.061340   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:33.089861   92925 cri.go:89] found id: ""
	I1213 19:12:33.089889   92925 logs.go:282] 0 containers: []
	W1213 19:12:33.089898   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:33.089907   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:33.089919   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:33.108679   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:33.108710   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:33.162722   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:33.162768   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:33.227823   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:33.227861   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:33.260183   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:33.260210   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:33.286847   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:33.286872   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:33.368228   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:33.368263   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:33.475747   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:33.475786   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:33.554192   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:33.546124    6395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:33.546992    6395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:33.548557    6395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:33.549128    6395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:33.550628    6395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:33.546124    6395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:33.546992    6395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:33.548557    6395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:33.549128    6395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:33.550628    6395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:33.554212   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:33.554225   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:33.579823   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:33.579850   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:33.623777   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:33.623815   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:36.157314   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:36.168502   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:36.168576   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:36.196421   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:36.196442   92925 cri.go:89] found id: ""
	I1213 19:12:36.196451   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:36.196511   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:36.200568   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:36.200636   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:36.227300   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:36.227324   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:36.227331   92925 cri.go:89] found id: ""
	I1213 19:12:36.227338   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:36.227396   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:36.231459   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:36.235239   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:36.235316   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:36.268611   92925 cri.go:89] found id: ""
	I1213 19:12:36.268635   92925 logs.go:282] 0 containers: []
	W1213 19:12:36.268644   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:36.268650   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:36.268731   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:36.308479   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:36.308576   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:36.308597   92925 cri.go:89] found id: ""
	I1213 19:12:36.308642   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:36.308738   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:36.312547   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:36.316077   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:36.316189   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:36.342346   92925 cri.go:89] found id: ""
	I1213 19:12:36.342382   92925 logs.go:282] 0 containers: []
	W1213 19:12:36.342392   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:36.342414   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:36.342496   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:36.368808   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:36.368834   92925 cri.go:89] found id: ""
	I1213 19:12:36.368844   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:36.368899   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:36.372705   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:36.372790   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:36.399760   92925 cri.go:89] found id: ""
	I1213 19:12:36.399796   92925 logs.go:282] 0 containers: []
	W1213 19:12:36.399805   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:36.399817   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:36.399829   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:36.497016   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:36.497097   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:36.511432   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:36.511552   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:36.587222   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:36.577960    6497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:36.578711    6497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:36.580805    6497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:36.581572    6497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:36.583427    6497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:36.577960    6497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:36.578711    6497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:36.580805    6497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:36.581572    6497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:36.583427    6497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:36.587247   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:36.587262   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:36.630739   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:36.630774   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:36.683440   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:36.683473   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:36.751190   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:36.751241   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:36.779744   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:36.779833   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:36.806180   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:36.806206   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:36.832449   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:36.832475   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:36.910859   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:36.910900   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:39.441151   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:39.452365   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:39.452439   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:39.484411   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:39.484436   92925 cri.go:89] found id: ""
	I1213 19:12:39.484444   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:39.484499   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:39.488316   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:39.488390   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:39.519236   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:39.519263   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:39.519268   92925 cri.go:89] found id: ""
	I1213 19:12:39.519277   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:39.519331   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:39.523340   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:39.529308   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:39.529377   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:39.559339   92925 cri.go:89] found id: ""
	I1213 19:12:39.559405   92925 logs.go:282] 0 containers: []
	W1213 19:12:39.559437   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:39.559456   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:39.559543   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:39.589737   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:39.589769   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:39.589775   92925 cri.go:89] found id: ""
	I1213 19:12:39.589783   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:39.589848   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:39.593976   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:39.598330   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:39.598421   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:39.631670   92925 cri.go:89] found id: ""
	I1213 19:12:39.631699   92925 logs.go:282] 0 containers: []
	W1213 19:12:39.631708   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:39.631714   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:39.631783   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:39.662738   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:39.662803   92925 cri.go:89] found id: ""
	I1213 19:12:39.662824   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:39.662906   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:39.666773   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:39.666867   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:39.695600   92925 cri.go:89] found id: ""
	I1213 19:12:39.695627   92925 logs.go:282] 0 containers: []
	W1213 19:12:39.695637   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:39.695646   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:39.695658   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:39.787866   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:39.787904   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:39.864556   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:39.853140    6628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:39.856488    6628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:39.857226    6628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:39.858708    6628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:39.859314    6628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:39.853140    6628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:39.856488    6628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:39.857226    6628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:39.858708    6628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:39.859314    6628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:39.864580   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:39.864594   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:39.893552   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:39.893593   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:39.935040   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:39.935070   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:39.977962   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:39.977992   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:40.052674   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:40.052713   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:40.145597   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:40.145709   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:40.181340   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:40.181368   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:40.194929   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:40.194999   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:40.222595   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:40.222665   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:42.749068   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:42.760019   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:42.760098   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:42.790868   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:42.790891   92925 cri.go:89] found id: ""
	I1213 19:12:42.790898   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:42.790953   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:42.794682   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:42.794770   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:42.823001   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:42.823024   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:42.823029   92925 cri.go:89] found id: ""
	I1213 19:12:42.823036   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:42.823102   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:42.826966   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:42.830581   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:42.830667   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:42.857298   92925 cri.go:89] found id: ""
	I1213 19:12:42.857325   92925 logs.go:282] 0 containers: []
	W1213 19:12:42.857334   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:42.857340   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:42.857402   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:42.888499   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:42.888524   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:42.888528   92925 cri.go:89] found id: ""
	I1213 19:12:42.888535   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:42.888601   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:42.894724   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:42.898823   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:42.898944   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:42.925225   92925 cri.go:89] found id: ""
	I1213 19:12:42.925262   92925 logs.go:282] 0 containers: []
	W1213 19:12:42.925271   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:42.925277   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:42.925363   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:42.954151   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:42.954186   92925 cri.go:89] found id: ""
	I1213 19:12:42.954195   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:42.954262   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:42.958191   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:42.958256   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:42.997632   92925 cri.go:89] found id: ""
	I1213 19:12:42.997699   92925 logs.go:282] 0 containers: []
	W1213 19:12:42.997722   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:42.997738   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:42.997750   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:43.044934   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:43.044968   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:43.130707   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:43.130787   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:43.162064   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:43.162196   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:43.174781   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:43.174807   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:43.248282   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:43.239057    6792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:43.239785    6792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:43.241456    6792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:43.242060    6792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:43.243778    6792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:43.239057    6792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:43.239785    6792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:43.241456    6792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:43.242060    6792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:43.243778    6792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:43.248309   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:43.248322   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:43.292697   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:43.292729   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:43.326878   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:43.326906   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:43.402321   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:43.402356   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:43.434630   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:43.434662   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:43.547901   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:43.547940   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:46.074896   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:46.086088   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:46.086156   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:46.138954   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:46.138977   92925 cri.go:89] found id: ""
	I1213 19:12:46.138985   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:46.139041   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:46.142934   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:46.143008   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:46.167983   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:46.168008   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:46.168014   92925 cri.go:89] found id: ""
	I1213 19:12:46.168022   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:46.168083   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:46.172203   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:46.176085   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:46.176164   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:46.206474   92925 cri.go:89] found id: ""
	I1213 19:12:46.206501   92925 logs.go:282] 0 containers: []
	W1213 19:12:46.206509   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:46.206515   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:46.206572   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:46.232990   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:46.233047   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:46.233052   92925 cri.go:89] found id: ""
	I1213 19:12:46.233059   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:46.233121   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:46.236960   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:46.241098   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:46.241171   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:46.277846   92925 cri.go:89] found id: ""
	I1213 19:12:46.277872   92925 logs.go:282] 0 containers: []
	W1213 19:12:46.277881   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:46.277886   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:46.277945   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:46.306293   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:46.306316   92925 cri.go:89] found id: ""
	I1213 19:12:46.306324   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:46.306383   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:46.310146   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:46.310220   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:46.337703   92925 cri.go:89] found id: ""
	I1213 19:12:46.337728   92925 logs.go:282] 0 containers: []
	W1213 19:12:46.337737   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:46.337746   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:46.337757   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:46.433354   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:46.433391   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:46.446062   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:46.446089   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:46.474866   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:46.474894   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:46.518894   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:46.518972   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:46.584190   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:46.584221   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:46.612728   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:46.612798   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:46.693365   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:46.693401   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:46.730005   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:46.730036   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:46.805821   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:46.797250    6951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:46.797857    6951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:46.799401    6951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:46.799906    6951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:46.801867    6951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:46.797250    6951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:46.797857    6951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:46.799401    6951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:46.799906    6951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:46.801867    6951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:46.805844   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:46.805858   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:46.849142   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:46.849180   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:49.377325   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:49.388007   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:49.388073   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:49.414745   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:49.414768   92925 cri.go:89] found id: ""
	I1213 19:12:49.414777   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:49.414831   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:49.418502   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:49.418579   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:49.443751   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:49.443772   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:49.443777   92925 cri.go:89] found id: ""
	I1213 19:12:49.443784   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:49.443864   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:49.447524   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:49.450957   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:49.451025   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:49.478284   92925 cri.go:89] found id: ""
	I1213 19:12:49.478309   92925 logs.go:282] 0 containers: []
	W1213 19:12:49.478318   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:49.478324   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:49.478383   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:49.506581   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:49.506604   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:49.506609   92925 cri.go:89] found id: ""
	I1213 19:12:49.506617   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:49.506673   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:49.513976   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:49.518489   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:49.518567   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:49.545961   92925 cri.go:89] found id: ""
	I1213 19:12:49.545986   92925 logs.go:282] 0 containers: []
	W1213 19:12:49.545995   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:49.546001   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:49.546072   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:49.579946   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:49.579974   92925 cri.go:89] found id: ""
	I1213 19:12:49.579983   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:49.580036   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:49.583648   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:49.583726   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:49.610201   92925 cri.go:89] found id: ""
	I1213 19:12:49.610278   92925 logs.go:282] 0 containers: []
	W1213 19:12:49.610294   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:49.610304   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:49.610321   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:49.682958   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:49.682995   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:49.716028   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:49.716058   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:49.744220   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:49.744248   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:49.783347   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:49.783379   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:49.826736   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:49.826770   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:49.860737   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:49.860767   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:49.894176   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:49.894206   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:49.978486   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:49.978525   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:50.088530   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:50.088567   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:50.107858   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:50.107886   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:50.186950   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:50.178748    7100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:50.179306    7100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:50.180827    7100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:50.181343    7100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:50.182902    7100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:50.178748    7100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:50.179306    7100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:50.180827    7100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:50.181343    7100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:50.182902    7100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:52.687879   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:52.700111   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:52.700185   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:52.727611   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:52.727635   92925 cri.go:89] found id: ""
	I1213 19:12:52.727643   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:52.727699   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:52.732611   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:52.732683   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:52.760331   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:52.760355   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:52.760361   92925 cri.go:89] found id: ""
	I1213 19:12:52.760369   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:52.760424   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:52.764203   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:52.767807   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:52.767880   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:52.794453   92925 cri.go:89] found id: ""
	I1213 19:12:52.794528   92925 logs.go:282] 0 containers: []
	W1213 19:12:52.794552   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:52.794571   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:52.794662   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:52.824938   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:52.825046   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:52.825077   92925 cri.go:89] found id: ""
	I1213 19:12:52.825108   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:52.825170   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:52.828865   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:52.832644   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:52.832718   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:52.860489   92925 cri.go:89] found id: ""
	I1213 19:12:52.860512   92925 logs.go:282] 0 containers: []
	W1213 19:12:52.860521   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:52.860527   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:52.860588   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:52.886828   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:52.886862   92925 cri.go:89] found id: ""
	I1213 19:12:52.886872   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:52.886940   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:52.890986   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:52.891106   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:52.917681   92925 cri.go:89] found id: ""
	I1213 19:12:52.917749   92925 logs.go:282] 0 containers: []
	W1213 19:12:52.917776   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:52.917799   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:52.917837   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:52.948506   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:52.948535   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:52.977936   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:52.977963   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:53.041212   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:53.041249   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:53.080162   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:53.080189   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:53.174852   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:53.174897   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:53.273766   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:53.273802   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:53.285893   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:53.285925   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:53.352966   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:53.343677    7212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:53.345158    7212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:53.345928    7212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:53.347424    7212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:53.347925    7212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:53.343677    7212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:53.345158    7212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:53.345928    7212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:53.347424    7212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:53.347925    7212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:53.352990   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:53.353032   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:53.391432   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:53.391464   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:53.451329   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:53.451363   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:55.977809   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:55.993375   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:55.993492   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:56.026972   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:56.026993   92925 cri.go:89] found id: ""
	I1213 19:12:56.027001   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:56.027059   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:56.031128   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:56.031204   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:56.058936   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:56.058958   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:56.058963   92925 cri.go:89] found id: ""
	I1213 19:12:56.058971   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:56.059024   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:56.062862   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:56.066757   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:56.066858   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:56.096088   92925 cri.go:89] found id: ""
	I1213 19:12:56.096112   92925 logs.go:282] 0 containers: []
	W1213 19:12:56.096121   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:56.096134   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:56.096196   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:56.138653   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:56.138678   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:56.138683   92925 cri.go:89] found id: ""
	I1213 19:12:56.138691   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:56.138748   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:56.142767   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:56.146336   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:56.146413   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:56.176996   92925 cri.go:89] found id: ""
	I1213 19:12:56.177098   92925 logs.go:282] 0 containers: []
	W1213 19:12:56.177115   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:56.177122   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:56.177191   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:56.206318   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:56.206341   92925 cri.go:89] found id: ""
	I1213 19:12:56.206350   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:56.206405   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:56.210085   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:56.210208   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:56.240242   92925 cri.go:89] found id: ""
	I1213 19:12:56.240269   92925 logs.go:282] 0 containers: []
	W1213 19:12:56.240278   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:56.240287   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:56.240299   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:56.268772   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:56.268800   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:56.282265   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:56.282293   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:56.334697   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:56.334731   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:56.419986   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:56.420074   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:56.466391   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:56.466421   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:56.578289   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:56.578327   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:56.657266   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:56.648227    7343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:56.649364    7343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:56.650885    7343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:56.651401    7343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:56.653076    7343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:56.648227    7343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:56.649364    7343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:56.650885    7343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:56.651401    7343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:56.653076    7343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:56.657289   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:56.657302   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:56.685603   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:56.685631   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:56.732451   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:56.732487   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:56.807034   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:56.807068   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:59.335877   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:59.346983   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:59.347053   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:59.375213   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:59.375241   92925 cri.go:89] found id: ""
	I1213 19:12:59.375250   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:59.375308   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:59.379246   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:59.379319   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:59.406052   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:59.406073   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:59.406078   92925 cri.go:89] found id: ""
	I1213 19:12:59.406085   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:59.406142   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:59.409969   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:59.413744   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:59.413813   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:59.440031   92925 cri.go:89] found id: ""
	I1213 19:12:59.440057   92925 logs.go:282] 0 containers: []
	W1213 19:12:59.440066   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:59.440072   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:59.440131   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:59.470750   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:59.470770   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:59.470775   92925 cri.go:89] found id: ""
	I1213 19:12:59.470782   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:59.470836   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:59.474671   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:59.478148   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:59.478230   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:59.532301   92925 cri.go:89] found id: ""
	I1213 19:12:59.532334   92925 logs.go:282] 0 containers: []
	W1213 19:12:59.532344   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:59.532350   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:59.532423   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:59.558719   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:59.558742   92925 cri.go:89] found id: ""
	I1213 19:12:59.558750   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:59.558814   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:59.562460   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:59.562534   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:59.588851   92925 cri.go:89] found id: ""
	I1213 19:12:59.588916   92925 logs.go:282] 0 containers: []
	W1213 19:12:59.588942   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:59.588964   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:59.589031   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:59.665993   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:59.666032   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:59.712805   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:59.712839   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:59.725635   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:59.725688   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:59.797796   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:59.790093    7462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:59.790845    7462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:59.791906    7462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:59.792472    7462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:59.794170    7462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:59.790093    7462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:59.790845    7462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:59.791906    7462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:59.792472    7462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:59.794170    7462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:59.797819   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:59.797831   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:59.825855   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:59.825886   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:59.864251   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:59.864286   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:59.890125   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:59.890151   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:59.981337   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:59.981387   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:00.239751   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:00.239799   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:00.366187   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:00.368005   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:02.909028   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:02.919617   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:02.919732   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:02.946548   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:02.946613   92925 cri.go:89] found id: ""
	I1213 19:13:02.946629   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:02.946696   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:02.950448   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:02.950542   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:02.975550   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:02.975572   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:02.975577   92925 cri.go:89] found id: ""
	I1213 19:13:02.975585   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:02.975645   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:02.979406   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:02.984704   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:02.984818   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:03.017288   92925 cri.go:89] found id: ""
	I1213 19:13:03.017311   92925 logs.go:282] 0 containers: []
	W1213 19:13:03.017320   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:03.017334   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:03.017393   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:03.048824   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:03.048850   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:03.048857   92925 cri.go:89] found id: ""
	I1213 19:13:03.048864   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:03.048919   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:03.052630   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:03.056397   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:03.056521   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:03.088050   92925 cri.go:89] found id: ""
	I1213 19:13:03.088123   92925 logs.go:282] 0 containers: []
	W1213 19:13:03.088146   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:03.088165   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:03.088271   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:03.119709   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:13:03.119778   92925 cri.go:89] found id: ""
	I1213 19:13:03.119801   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:13:03.119889   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:03.127122   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:03.127274   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:03.162913   92925 cri.go:89] found id: ""
	I1213 19:13:03.162936   92925 logs.go:282] 0 containers: []
	W1213 19:13:03.162945   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:03.162953   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:03.162966   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:03.207543   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:03.207579   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:03.279537   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:13:03.279575   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:13:03.314034   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:03.314062   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:03.394532   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:03.394567   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:03.428318   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:03.428351   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:03.528148   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:03.528187   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:03.626750   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:03.618493    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:03.619154    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:03.620764    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:03.621367    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:03.622889    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:03.618493    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:03.619154    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:03.620764    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:03.621367    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:03.622889    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:03.626775   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:03.626788   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:03.685480   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:03.685519   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:03.713856   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:03.713883   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:03.734590   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:03.734620   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:06.266879   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:06.277733   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:06.277799   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:06.305175   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:06.305196   92925 cri.go:89] found id: ""
	I1213 19:13:06.305204   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:06.305258   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:06.308850   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:06.308928   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:06.335153   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:06.335177   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:06.335182   92925 cri.go:89] found id: ""
	I1213 19:13:06.335189   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:06.335246   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:06.338903   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:06.342418   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:06.342493   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:06.372604   92925 cri.go:89] found id: ""
	I1213 19:13:06.372632   92925 logs.go:282] 0 containers: []
	W1213 19:13:06.372641   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:06.372646   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:06.372707   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:06.402642   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:06.402670   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:06.402675   92925 cri.go:89] found id: ""
	I1213 19:13:06.402682   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:06.402740   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:06.406787   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:06.411254   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:06.411335   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:06.437659   92925 cri.go:89] found id: ""
	I1213 19:13:06.437736   92925 logs.go:282] 0 containers: []
	W1213 19:13:06.437751   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:06.437758   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:06.437829   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:06.466702   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:06.466725   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:13:06.466730   92925 cri.go:89] found id: ""
	I1213 19:13:06.466737   92925 logs.go:282] 2 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:13:06.466793   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:06.470567   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:06.474150   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:06.474224   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:06.501494   92925 cri.go:89] found id: ""
	I1213 19:13:06.501569   92925 logs.go:282] 0 containers: []
	W1213 19:13:06.501594   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:06.501617   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:06.501662   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:06.544779   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:06.544813   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:06.609379   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:06.609413   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:06.637668   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:06.637698   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:06.664078   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:06.664105   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:06.709192   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:13:06.709225   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:13:06.737814   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:06.737845   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:06.810267   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:06.810302   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:06.841843   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:06.841871   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:06.938739   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:06.938776   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:06.951386   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:06.951414   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:07.032986   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:07.025075    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:07.025642    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:07.027282    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:07.027955    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:07.029566    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:07.025075    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:07.025642    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:07.027282    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:07.027955    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:07.029566    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:07.033040   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:07.033053   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:09.558493   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:09.570604   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:09.570681   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:09.598108   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:09.598133   92925 cri.go:89] found id: ""
	I1213 19:13:09.598141   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:09.598197   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:09.602596   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:09.602673   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:09.629705   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:09.629727   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:09.629733   92925 cri.go:89] found id: ""
	I1213 19:13:09.629741   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:09.629798   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:09.634280   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:09.637817   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:09.637895   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:09.665414   92925 cri.go:89] found id: ""
	I1213 19:13:09.665438   92925 logs.go:282] 0 containers: []
	W1213 19:13:09.665447   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:09.665453   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:09.665509   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:09.691729   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:09.691754   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:09.691759   92925 cri.go:89] found id: ""
	I1213 19:13:09.691766   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:09.691850   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:09.696064   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:09.700204   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:09.700308   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:09.732154   92925 cri.go:89] found id: ""
	I1213 19:13:09.732181   92925 logs.go:282] 0 containers: []
	W1213 19:13:09.732190   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:09.732196   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:09.732277   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:09.760821   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:09.760844   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:13:09.760849   92925 cri.go:89] found id: ""
	I1213 19:13:09.760856   92925 logs.go:282] 2 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:13:09.760918   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:09.764697   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:09.768225   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:09.768299   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:09.796678   92925 cri.go:89] found id: ""
	I1213 19:13:09.796748   92925 logs.go:282] 0 containers: []
	W1213 19:13:09.796773   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:09.796797   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:09.796844   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:09.892500   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:09.892536   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:09.905527   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:09.905557   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:09.964751   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:09.964785   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:10.026858   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:10.026896   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:10.095709   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:10.095747   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:10.135797   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:10.135834   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:10.207467   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:10.198321    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:10.199090    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:10.200887    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:10.201755    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:10.202624    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:10.198321    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:10.199090    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:10.200887    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:10.201755    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:10.202624    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
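Every one of the repeated describe-nodes failures in this window has the same signature: kubectl on the node cannot reach the apiserver at localhost:8443 (connection refused), so only the container-level logs above are collectable. A minimal sketch of how that state could be confirmed by hand from the node is shown below; it assumes only crictl and curl are present inside the minikube node, which this report does not itself guarantee.

	# list any kube-apiserver containers, running or exited (same crictl query the log above uses)
	sudo crictl ps -a --name=kube-apiserver

	# probe the apiserver port directly; the serving cert is self-signed, hence -k
	curl -sk https://localhost:8443/healthz || echo "apiserver not reachable on 8443"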
	I1213 19:13:10.207502   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:10.207515   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:10.233202   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:10.233298   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:10.259818   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:13:10.259845   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:13:10.286455   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:10.286482   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:10.359430   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:10.359465   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:12.894266   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:12.905675   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:12.905773   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:12.932239   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:12.932259   92925 cri.go:89] found id: ""
	I1213 19:13:12.932267   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:12.932320   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:12.935869   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:12.935938   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:12.961758   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:12.961778   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:12.961782   92925 cri.go:89] found id: ""
	I1213 19:13:12.961789   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:12.961846   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:12.965449   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:12.968967   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:12.969071   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:13.001173   92925 cri.go:89] found id: ""
	I1213 19:13:13.001203   92925 logs.go:282] 0 containers: []
	W1213 19:13:13.001213   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:13.001219   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:13.001333   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:13.029728   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:13.029751   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:13.029756   92925 cri.go:89] found id: ""
	I1213 19:13:13.029764   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:13.029818   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:13.033632   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:13.037474   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:13.037598   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:13.064000   92925 cri.go:89] found id: ""
	I1213 19:13:13.064025   92925 logs.go:282] 0 containers: []
	W1213 19:13:13.064034   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:13.064040   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:13.064151   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:13.092827   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:13.092847   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:13:13.092852   92925 cri.go:89] found id: ""
	I1213 19:13:13.092859   92925 logs.go:282] 2 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:13:13.092913   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:13.097637   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:13.102128   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:13.102195   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:13.132820   92925 cri.go:89] found id: ""
	I1213 19:13:13.132891   92925 logs.go:282] 0 containers: []
	W1213 19:13:13.132912   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:13.132934   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:13.132976   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:13.200851   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:13:13.200889   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:13:13.232573   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:13.232603   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:13.325521   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:13.325556   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:13.338293   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:13.338324   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:13.369921   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:13.369950   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:13.416445   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:13.416477   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:13.443214   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:13.443243   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:13.468415   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:13.468448   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:13.553200   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:13.553248   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:13.596683   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:13.596717   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:13.678127   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:13.669907    8089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:13.670748    8089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:13.672392    8089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:13.672709    8089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:13.674262    8089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:13.669907    8089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:13.670748    8089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:13.672392    8089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:13.672709    8089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:13.674262    8089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:13.678150   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:13.678167   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:16.227377   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:16.238613   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:16.238685   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:16.271628   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:16.271652   92925 cri.go:89] found id: ""
	I1213 19:13:16.271661   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:16.271717   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:16.275571   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:16.275645   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:16.304819   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:16.304843   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:16.304848   92925 cri.go:89] found id: ""
	I1213 19:13:16.304856   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:16.304911   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:16.308802   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:16.312668   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:16.312741   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:16.347113   92925 cri.go:89] found id: ""
	I1213 19:13:16.347137   92925 logs.go:282] 0 containers: []
	W1213 19:13:16.347146   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:16.347153   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:16.347209   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:16.380339   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:16.380362   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:16.380368   92925 cri.go:89] found id: ""
	I1213 19:13:16.380376   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:16.380433   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:16.383986   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:16.387756   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:16.387876   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:16.419309   92925 cri.go:89] found id: ""
	I1213 19:13:16.419344   92925 logs.go:282] 0 containers: []
	W1213 19:13:16.419353   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:16.419359   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:16.419427   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:16.447987   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:16.448019   92925 cri.go:89] found id: ""
	I1213 19:13:16.448028   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:16.448093   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:16.452467   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:16.452551   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:16.478206   92925 cri.go:89] found id: ""
	I1213 19:13:16.478271   92925 logs.go:282] 0 containers: []
	W1213 19:13:16.478298   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:16.478319   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:16.478361   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:16.505859   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:16.505891   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:16.547050   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:16.547085   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:16.591041   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:16.591074   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:16.659418   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:16.659502   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:16.686174   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:16.686202   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:16.763753   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:16.763792   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:16.795967   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:16.795996   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:16.909202   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:16.909246   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:16.921936   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:16.921962   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:16.996415   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:16.987820    8228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:16.988740    8228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:16.990501    8228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:16.990844    8228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:16.992387    8228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:16.987820    8228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:16.988740    8228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:16.990501    8228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:16.990844    8228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:16.992387    8228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:16.996438   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:16.996452   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:19.525182   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:19.536170   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:19.536246   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:19.563344   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:19.563368   92925 cri.go:89] found id: ""
	I1213 19:13:19.563377   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:19.563432   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:19.567191   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:19.567263   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:19.594906   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:19.594926   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:19.594936   92925 cri.go:89] found id: ""
	I1213 19:13:19.594944   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:19.595012   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:19.599420   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:19.603163   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:19.603240   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:19.636656   92925 cri.go:89] found id: ""
	I1213 19:13:19.636681   92925 logs.go:282] 0 containers: []
	W1213 19:13:19.636690   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:19.636696   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:19.636753   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:19.667204   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:19.667274   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:19.667292   92925 cri.go:89] found id: ""
	I1213 19:13:19.667316   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:19.667395   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:19.671184   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:19.674972   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:19.675041   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:19.704947   92925 cri.go:89] found id: ""
	I1213 19:13:19.704971   92925 logs.go:282] 0 containers: []
	W1213 19:13:19.704980   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:19.704988   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:19.705073   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:19.730669   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:19.730691   92925 cri.go:89] found id: ""
	I1213 19:13:19.730699   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:19.730771   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:19.735384   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:19.735477   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:19.760611   92925 cri.go:89] found id: ""
	I1213 19:13:19.760634   92925 logs.go:282] 0 containers: []
	W1213 19:13:19.760643   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:19.760669   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:19.760686   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:19.788592   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:19.788621   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:19.882694   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:19.882730   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:19.954514   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:19.946675    8317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:19.947253    8317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:19.948589    8317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:19.949210    8317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:19.950900    8317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:19.946675    8317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:19.947253    8317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:19.948589    8317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:19.949210    8317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:19.950900    8317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:19.954535   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:19.954550   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:19.980616   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:19.980694   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:20.035895   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:20.035930   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:20.104716   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:20.104768   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:20.199665   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:20.199701   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:20.234652   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:20.234680   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:20.248416   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:20.248444   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:20.296588   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:20.296624   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:22.824017   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:22.838193   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:22.838267   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:22.874481   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:22.874503   92925 cri.go:89] found id: ""
	I1213 19:13:22.874512   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:22.874578   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:22.878378   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:22.878467   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:22.907053   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:22.907075   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:22.907079   92925 cri.go:89] found id: ""
	I1213 19:13:22.907086   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:22.907143   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:22.911144   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:22.914933   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:22.915007   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:22.942646   92925 cri.go:89] found id: ""
	I1213 19:13:22.942714   92925 logs.go:282] 0 containers: []
	W1213 19:13:22.942729   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:22.942736   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:22.942797   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:22.969713   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:22.969735   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:22.969740   92925 cri.go:89] found id: ""
	I1213 19:13:22.969748   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:22.969804   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:22.973708   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:22.977426   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:22.977514   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:23.007912   92925 cri.go:89] found id: ""
	I1213 19:13:23.007939   92925 logs.go:282] 0 containers: []
	W1213 19:13:23.007948   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:23.007955   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:23.008018   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:23.040260   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:23.040284   92925 cri.go:89] found id: ""
	I1213 19:13:23.040293   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:23.040348   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:23.044273   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:23.044348   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:23.073414   92925 cri.go:89] found id: ""
	I1213 19:13:23.073445   92925 logs.go:282] 0 containers: []
	W1213 19:13:23.073454   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:23.073466   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:23.073478   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:23.147486   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:23.147526   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:23.180397   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:23.180426   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:23.262279   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:23.253482    8457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:23.254529    8457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:23.255324    8457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:23.256834    8457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:23.257439    8457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:23.253482    8457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:23.254529    8457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:23.255324    8457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:23.256834    8457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:23.257439    8457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:23.262302   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:23.262318   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:23.288912   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:23.288942   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:23.328328   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:23.328366   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:23.421984   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:23.422020   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:23.524961   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:23.524997   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:23.542790   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:23.542821   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:23.591486   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:23.591522   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:23.621748   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:23.621777   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:26.152673   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:26.164673   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:26.164740   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:26.192010   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:26.192031   92925 cri.go:89] found id: ""
	I1213 19:13:26.192040   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:26.192095   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:26.195849   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:26.195918   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:26.224593   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:26.224657   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:26.224677   92925 cri.go:89] found id: ""
	I1213 19:13:26.224702   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:26.224772   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:26.228545   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:26.231970   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:26.232086   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:26.259044   92925 cri.go:89] found id: ""
	I1213 19:13:26.259066   92925 logs.go:282] 0 containers: []
	W1213 19:13:26.259075   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:26.259080   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:26.259137   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:26.287771   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:26.287793   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:26.287798   92925 cri.go:89] found id: ""
	I1213 19:13:26.287805   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:26.287861   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:26.293156   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:26.296722   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:26.296805   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:26.323701   92925 cri.go:89] found id: ""
	I1213 19:13:26.323731   92925 logs.go:282] 0 containers: []
	W1213 19:13:26.323746   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:26.323753   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:26.323820   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:26.350119   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:26.350137   92925 cri.go:89] found id: ""
	I1213 19:13:26.350145   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:26.350199   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:26.353849   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:26.353916   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:26.380009   92925 cri.go:89] found id: ""
	I1213 19:13:26.380035   92925 logs.go:282] 0 containers: []
	W1213 19:13:26.380044   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:26.380053   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:26.380065   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:26.438029   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:26.438062   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:26.475066   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:26.475096   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:26.507857   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:26.507887   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:26.521466   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:26.521493   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:26.565942   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:26.565983   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:26.634647   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:26.634680   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:26.662943   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:26.662972   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:26.737712   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:26.737749   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:26.840754   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:26.840792   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:26.911511   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:26.903881    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:26.904637    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:26.906164    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:26.906441    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:26.907906    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:26.903881    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:26.904637    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:26.906164    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:26.906441    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:26.907906    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:26.911534   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:26.911547   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:29.438403   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:29.449664   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:29.449742   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:29.477323   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:29.477342   92925 cri.go:89] found id: ""
	I1213 19:13:29.477351   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:29.477405   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:29.480946   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:29.481052   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:29.515446   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:29.515469   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:29.515473   92925 cri.go:89] found id: ""
	I1213 19:13:29.515480   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:29.515537   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:29.520209   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:29.523894   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:29.523994   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:29.550207   92925 cri.go:89] found id: ""
	I1213 19:13:29.550232   92925 logs.go:282] 0 containers: []
	W1213 19:13:29.550242   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:29.550272   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:29.550349   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:29.576154   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:29.576177   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:29.576182   92925 cri.go:89] found id: ""
	I1213 19:13:29.576195   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:29.576267   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:29.580154   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:29.583801   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:29.583876   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:29.613771   92925 cri.go:89] found id: ""
	I1213 19:13:29.613795   92925 logs.go:282] 0 containers: []
	W1213 19:13:29.613805   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:29.613810   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:29.613872   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:29.640080   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:29.640103   92925 cri.go:89] found id: ""
	I1213 19:13:29.640112   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:29.640167   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:29.643810   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:29.643883   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:29.674496   92925 cri.go:89] found id: ""
	I1213 19:13:29.674567   92925 logs.go:282] 0 containers: []
	W1213 19:13:29.674583   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:29.674592   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:29.674616   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:29.704354   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:29.704383   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:29.760688   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:29.760724   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:29.789616   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:29.789644   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:29.817300   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:29.817328   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:29.848838   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:29.848866   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:29.949492   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:29.949527   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:30.081487   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:30.081528   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:30.170948   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:30.170989   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:30.251666   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:30.251705   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:30.265404   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:30.265433   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:30.340984   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:30.332491    8789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:30.333283    8789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:30.335347    8789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:30.335760    8789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:30.337330    8789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:30.332491    8789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:30.333283    8789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:30.335347    8789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:30.335760    8789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:30.337330    8789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:32.841244   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:32.851830   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:32.851904   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:32.878262   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:32.878282   92925 cri.go:89] found id: ""
	I1213 19:13:32.878290   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:32.878345   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:32.881794   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:32.881871   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:32.908784   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:32.908807   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:32.908812   92925 cri.go:89] found id: ""
	I1213 19:13:32.908819   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:32.908877   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:32.913113   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:32.916615   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:32.916713   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:32.945436   92925 cri.go:89] found id: ""
	I1213 19:13:32.945460   92925 logs.go:282] 0 containers: []
	W1213 19:13:32.945468   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:32.945474   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:32.945532   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:32.972389   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:32.972409   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:32.972414   92925 cri.go:89] found id: ""
	I1213 19:13:32.972421   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:32.972496   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:32.976105   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:32.979491   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:32.979558   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:33.013568   92925 cri.go:89] found id: ""
	I1213 19:13:33.013590   92925 logs.go:282] 0 containers: []
	W1213 19:13:33.013598   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:33.013604   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:33.013662   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:33.041534   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:33.041557   92925 cri.go:89] found id: ""
	I1213 19:13:33.041566   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:33.041622   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:33.045294   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:33.045445   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:33.074126   92925 cri.go:89] found id: ""
	I1213 19:13:33.074196   92925 logs.go:282] 0 containers: []
	W1213 19:13:33.074224   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:33.074248   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:33.074274   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:33.108085   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:33.108112   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:33.196053   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:33.196096   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:33.238729   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:33.238801   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:33.334220   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:33.334258   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:33.347401   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:33.347431   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:33.415328   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:33.415362   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:33.444593   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:33.444672   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:33.519042   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:33.509468    8903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:33.510273    8903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:33.511953    8903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:33.512620    8903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:33.513636    8903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:33.509468    8903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:33.510273    8903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:33.511953    8903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:33.512620    8903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:33.513636    8903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:33.519066   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:33.519078   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:33.546564   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:33.546593   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:33.588382   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:33.588418   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:36.135267   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:36.146588   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:36.146662   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:36.173719   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:36.173741   92925 cri.go:89] found id: ""
	I1213 19:13:36.173750   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:36.173821   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:36.177610   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:36.177680   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:36.204513   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:36.204536   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:36.204540   92925 cri.go:89] found id: ""
	I1213 19:13:36.204548   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:36.204602   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:36.208516   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:36.211831   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:36.211901   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:36.243167   92925 cri.go:89] found id: ""
	I1213 19:13:36.243194   92925 logs.go:282] 0 containers: []
	W1213 19:13:36.243205   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:36.243211   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:36.243271   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:36.272787   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:36.272812   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:36.272817   92925 cri.go:89] found id: ""
	I1213 19:13:36.272825   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:36.272880   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:36.276627   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:36.280060   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:36.280182   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:36.309203   92925 cri.go:89] found id: ""
	I1213 19:13:36.309231   92925 logs.go:282] 0 containers: []
	W1213 19:13:36.309242   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:36.309248   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:36.309310   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:36.342531   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:36.342554   92925 cri.go:89] found id: ""
	I1213 19:13:36.342563   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:36.342631   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:36.346318   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:36.346392   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:36.374406   92925 cri.go:89] found id: ""
	I1213 19:13:36.374442   92925 logs.go:282] 0 containers: []
	W1213 19:13:36.374467   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:36.374485   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:36.374497   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:36.474302   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:36.474340   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:36.557406   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:36.549415    9001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:36.550022    9001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:36.551319    9001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:36.551900    9001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:36.553579    9001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:36.549415    9001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:36.550022    9001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:36.551319    9001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:36.551900    9001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:36.553579    9001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:36.557430   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:36.557443   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:36.583387   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:36.583415   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:36.623378   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:36.623413   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:36.666931   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:36.666964   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:36.696482   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:36.696513   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:36.730677   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:36.730708   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:36.743357   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:36.743386   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:36.813864   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:36.813900   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:36.848686   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:36.848716   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:39.433464   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:39.444066   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:39.444136   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:39.471666   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:39.471686   92925 cri.go:89] found id: ""
	I1213 19:13:39.471693   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:39.471753   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:39.475549   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:39.475641   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:39.505541   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:39.505615   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:39.505645   92925 cri.go:89] found id: ""
	I1213 19:13:39.505667   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:39.505752   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:39.511310   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:39.515781   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:39.515898   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:39.545256   92925 cri.go:89] found id: ""
	I1213 19:13:39.545290   92925 logs.go:282] 0 containers: []
	W1213 19:13:39.545300   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:39.545306   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:39.545379   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:39.576057   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:39.576080   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:39.576085   92925 cri.go:89] found id: ""
	I1213 19:13:39.576092   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:39.576146   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:39.580177   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:39.584087   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:39.584160   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:39.610819   92925 cri.go:89] found id: ""
	I1213 19:13:39.610843   92925 logs.go:282] 0 containers: []
	W1213 19:13:39.610863   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:39.610871   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:39.610929   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:39.638458   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:39.638481   92925 cri.go:89] found id: ""
	I1213 19:13:39.638503   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:39.638564   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:39.642537   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:39.642610   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:39.670872   92925 cri.go:89] found id: ""
	I1213 19:13:39.670951   92925 logs.go:282] 0 containers: []
	W1213 19:13:39.670975   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:39.670998   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:39.671043   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:39.774702   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:39.774743   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:39.846826   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:39.837968    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:39.838545    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:39.840574    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:39.841359    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:39.842988    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:39.837968    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:39.838545    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:39.840574    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:39.841359    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:39.842988    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:39.846849   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:39.846862   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:39.892712   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:39.892743   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:39.960690   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:39.960729   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:40.022528   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:40.022560   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:40.107424   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:40.107461   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:40.149433   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:40.149472   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:40.162446   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:40.162479   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:40.191980   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:40.192009   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:40.239148   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:40.239228   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:42.771936   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:42.782654   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:42.782726   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:42.808850   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:42.808869   92925 cri.go:89] found id: ""
	I1213 19:13:42.808877   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:42.808938   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:42.812682   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:42.812753   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:42.840980   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:42.841072   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:42.841097   92925 cri.go:89] found id: ""
	I1213 19:13:42.841122   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:42.841210   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:42.844946   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:42.848726   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:42.848811   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:42.888597   92925 cri.go:89] found id: ""
	I1213 19:13:42.888663   92925 logs.go:282] 0 containers: []
	W1213 19:13:42.888688   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:42.888707   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:42.888791   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:42.916253   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:42.916323   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:42.916341   92925 cri.go:89] found id: ""
	I1213 19:13:42.916364   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:42.916443   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:42.920031   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:42.923493   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:42.923565   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:42.950967   92925 cri.go:89] found id: ""
	I1213 19:13:42.950991   92925 logs.go:282] 0 containers: []
	W1213 19:13:42.950999   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:42.951005   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:42.951062   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:42.977861   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:42.977884   92925 cri.go:89] found id: ""
	I1213 19:13:42.977892   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:42.977946   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:42.985150   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:42.985252   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:43.014767   92925 cri.go:89] found id: ""
	I1213 19:13:43.014794   92925 logs.go:282] 0 containers: []
	W1213 19:13:43.014803   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:43.014813   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:43.014826   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:43.089031   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:43.089070   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:43.152812   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:43.152840   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:43.253685   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:43.253720   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:43.268102   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:43.268130   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:43.342529   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:43.333442    9294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:43.333905    9294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:43.335923    9294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:43.336467    9294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:43.338397    9294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:43.333442    9294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:43.333905    9294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:43.335923    9294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:43.336467    9294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:43.338397    9294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:43.342553   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:43.342566   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:43.383957   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:43.383996   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:43.431627   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:43.431662   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:43.504349   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:43.504386   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:43.541135   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:43.541167   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:43.570288   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:43.570315   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:46.101243   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:46.114537   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:46.114605   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:46.142285   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:46.142310   92925 cri.go:89] found id: ""
	I1213 19:13:46.142319   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:46.142374   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:46.146198   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:46.146275   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:46.172413   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:46.172485   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:46.172504   92925 cri.go:89] found id: ""
	I1213 19:13:46.172529   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:46.172649   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:46.176629   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:46.180398   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:46.180514   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:46.208892   92925 cri.go:89] found id: ""
	I1213 19:13:46.208925   92925 logs.go:282] 0 containers: []
	W1213 19:13:46.208934   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:46.208942   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:46.209074   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:46.237365   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:46.237388   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:46.237394   92925 cri.go:89] found id: ""
	I1213 19:13:46.237401   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:46.237458   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:46.241815   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:46.245384   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:46.245482   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:46.272996   92925 cri.go:89] found id: ""
	I1213 19:13:46.273063   92925 logs.go:282] 0 containers: []
	W1213 19:13:46.273072   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:46.273078   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:46.273160   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:46.302629   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:46.302654   92925 cri.go:89] found id: ""
	I1213 19:13:46.302663   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:46.302737   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:46.306762   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:46.306861   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:46.337280   92925 cri.go:89] found id: ""
	I1213 19:13:46.337346   92925 logs.go:282] 0 containers: []
	W1213 19:13:46.337369   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:46.337384   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:46.337395   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:46.349174   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:46.349204   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:46.419942   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:46.411077    9420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:46.411612    9420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:46.413348    9420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:46.413991    9420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:46.415827    9420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:46.411077    9420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:46.411612    9420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:46.413348    9420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:46.413991    9420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:46.415827    9420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:46.419977   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:46.419993   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:46.446859   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:46.446885   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:46.487087   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:46.487124   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:46.547232   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:46.547267   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:46.574826   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:46.574854   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:46.602584   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:46.602609   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:46.640086   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:46.640117   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:46.740777   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:46.740818   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:46.812315   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:46.812357   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:49.395199   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:49.405934   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:49.406009   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:49.433789   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:49.433810   92925 cri.go:89] found id: ""
	I1213 19:13:49.433827   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:49.433883   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:49.437578   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:49.437651   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:49.471711   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:49.471734   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:49.471740   92925 cri.go:89] found id: ""
	I1213 19:13:49.471748   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:49.471801   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:49.475461   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:49.479094   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:49.479168   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:49.505391   92925 cri.go:89] found id: ""
	I1213 19:13:49.505417   92925 logs.go:282] 0 containers: []
	W1213 19:13:49.505426   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:49.505433   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:49.505488   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:49.540863   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:49.540890   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:49.540895   92925 cri.go:89] found id: ""
	I1213 19:13:49.540903   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:49.540960   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:49.544771   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:49.548451   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:49.548524   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:49.575402   92925 cri.go:89] found id: ""
	I1213 19:13:49.575428   92925 logs.go:282] 0 containers: []
	W1213 19:13:49.575436   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:49.575442   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:49.575501   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:49.605123   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:49.605143   92925 cri.go:89] found id: ""
	I1213 19:13:49.605151   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:49.605211   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:49.608919   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:49.609061   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:49.637050   92925 cri.go:89] found id: ""
	I1213 19:13:49.637075   92925 logs.go:282] 0 containers: []
	W1213 19:13:49.637084   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:49.637093   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:49.637105   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:49.744000   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:49.744048   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:49.811345   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:49.802050    9555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:49.802444    9555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:49.805468    9555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:49.805922    9555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:49.807507    9555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:49.802050    9555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:49.802444    9555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:49.805468    9555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:49.805922    9555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:49.807507    9555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:49.811370   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:49.811384   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:49.852043   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:49.852081   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:49.896314   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:49.896349   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:49.924211   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:49.924240   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:50.006219   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:50.006263   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:50.039895   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:50.039978   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:50.054629   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:50.054656   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:50.084937   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:50.084966   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:50.159510   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:50.159553   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:52.688326   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:52.699486   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:52.699554   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:52.726195   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:52.726216   92925 cri.go:89] found id: ""
	I1213 19:13:52.726224   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:52.726280   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:52.730715   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:52.730785   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:52.756911   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:52.756933   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:52.756938   92925 cri.go:89] found id: ""
	I1213 19:13:52.756946   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:52.757069   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:52.760788   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:52.764452   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:52.764551   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:52.790658   92925 cri.go:89] found id: ""
	I1213 19:13:52.790732   92925 logs.go:282] 0 containers: []
	W1213 19:13:52.790749   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:52.790756   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:52.790816   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:52.818365   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:52.818388   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:52.818394   92925 cri.go:89] found id: ""
	I1213 19:13:52.818402   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:52.818477   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:52.822460   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:52.826054   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:52.826130   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:52.853218   92925 cri.go:89] found id: ""
	I1213 19:13:52.853245   92925 logs.go:282] 0 containers: []
	W1213 19:13:52.853256   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:52.853262   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:52.853321   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:52.879712   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:52.879736   92925 cri.go:89] found id: ""
	I1213 19:13:52.879744   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:52.879798   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:52.883563   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:52.883639   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:52.910499   92925 cri.go:89] found id: ""
	I1213 19:13:52.910526   92925 logs.go:282] 0 containers: []
	W1213 19:13:52.910535   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:52.910545   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:52.910577   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:52.990183   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:52.990219   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:53.026776   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:53.026805   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:53.118043   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:53.107629    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:53.110332    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:53.111160    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:53.112144    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:53.113182    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:53.107629    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:53.110332    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:53.111160    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:53.112144    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:53.113182    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:53.118090   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:53.118141   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:53.160995   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:53.161190   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:53.204763   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:53.204795   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:53.270772   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:53.270810   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:53.370857   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:53.370895   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:53.383046   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:53.383074   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:53.410648   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:53.410684   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:53.439739   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:53.439768   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:55.970243   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:55.981613   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:55.981689   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:56.018614   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:56.018637   92925 cri.go:89] found id: ""
	I1213 19:13:56.018647   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:56.018707   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:56.022914   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:56.022990   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:56.056158   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:56.056182   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:56.056187   92925 cri.go:89] found id: ""
	I1213 19:13:56.056194   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:56.056275   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:56.061504   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:56.065201   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:56.065281   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:56.094861   92925 cri.go:89] found id: ""
	I1213 19:13:56.094887   92925 logs.go:282] 0 containers: []
	W1213 19:13:56.094896   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:56.094903   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:56.094982   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:56.133165   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:56.133240   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:56.133260   92925 cri.go:89] found id: ""
	I1213 19:13:56.133291   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:56.133356   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:56.137225   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:56.140713   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:56.140785   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:56.168013   92925 cri.go:89] found id: ""
	I1213 19:13:56.168039   92925 logs.go:282] 0 containers: []
	W1213 19:13:56.168048   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:56.168055   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:56.168118   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:56.196793   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:56.196867   92925 cri.go:89] found id: ""
	I1213 19:13:56.196876   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:56.196935   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:56.200591   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:56.200672   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:56.227851   92925 cri.go:89] found id: ""
	I1213 19:13:56.227877   92925 logs.go:282] 0 containers: []
	W1213 19:13:56.227887   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:56.227896   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:56.227908   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:56.323380   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:56.323416   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:56.337259   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:56.337289   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:56.362908   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:56.362939   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:56.443333   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:56.443372   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:56.522467   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:56.511318    9846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:56.512215    9846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:56.514040    9846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:56.515835    9846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:56.516378    9846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:56.511318    9846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:56.512215    9846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:56.514040    9846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:56.515835    9846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:56.516378    9846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:56.522485   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:56.522498   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:56.561809   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:56.561843   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:56.606943   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:56.606979   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:56.678268   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:56.678310   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:56.707280   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:56.707309   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:56.736890   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:56.736917   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:59.286954   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:59.298376   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:59.298447   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:59.325376   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:59.325399   92925 cri.go:89] found id: ""
	I1213 19:13:59.325407   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:59.325464   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:59.329049   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:59.329123   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:59.356066   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:59.356085   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:59.356089   92925 cri.go:89] found id: ""
	I1213 19:13:59.356097   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:59.356150   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:59.360113   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:59.363660   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:59.363736   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:59.389568   92925 cri.go:89] found id: ""
	I1213 19:13:59.389594   92925 logs.go:282] 0 containers: []
	W1213 19:13:59.389604   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:59.389611   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:59.389692   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:59.423243   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:59.423266   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:59.423270   92925 cri.go:89] found id: ""
	I1213 19:13:59.423278   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:59.423350   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:59.426944   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:59.431770   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:59.431844   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:59.458103   92925 cri.go:89] found id: ""
	I1213 19:13:59.458173   92925 logs.go:282] 0 containers: []
	W1213 19:13:59.458220   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:59.458246   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:59.458332   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:59.487250   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:59.487324   92925 cri.go:89] found id: ""
	I1213 19:13:59.487340   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:59.487406   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:59.491784   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:59.491852   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:59.525717   92925 cri.go:89] found id: ""
	I1213 19:13:59.525739   92925 logs.go:282] 0 containers: []
	W1213 19:13:59.525748   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:59.525756   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:59.525768   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:59.554063   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:59.554091   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:59.599874   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:59.599909   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:59.626733   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:59.626765   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:59.700778   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:59.700814   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:59.713358   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:59.713388   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:59.783137   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:59.774677   10001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:59.775356   10001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:59.776867   10001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:59.777580   10001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:59.778486   10001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:59.774677   10001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:59.775356   10001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:59.776867   10001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:59.777580   10001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:59.778486   10001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:59.783158   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:59.783169   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:59.832218   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:59.832248   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:59.901253   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:59.901329   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:59.930678   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:59.930701   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:59.962070   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:59.962099   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:02.744450   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:02.755514   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:02.755587   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:02.782984   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:02.783079   92925 cri.go:89] found id: ""
	I1213 19:14:02.783095   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:02.783157   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:02.787187   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:02.787262   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:02.814931   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:02.814954   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:02.814959   92925 cri.go:89] found id: ""
	I1213 19:14:02.814967   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:02.815031   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:02.818983   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:02.822788   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:02.822865   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:02.848942   92925 cri.go:89] found id: ""
	I1213 19:14:02.848966   92925 logs.go:282] 0 containers: []
	W1213 19:14:02.848975   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:02.848991   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:02.849096   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:02.876134   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:02.876155   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:02.876160   92925 cri.go:89] found id: ""
	I1213 19:14:02.876168   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:02.876249   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:02.880576   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:02.885335   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:02.885459   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:02.913660   92925 cri.go:89] found id: ""
	I1213 19:14:02.913733   92925 logs.go:282] 0 containers: []
	W1213 19:14:02.913763   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:02.913802   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:02.913924   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:02.940178   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:02.940248   92925 cri.go:89] found id: ""
	I1213 19:14:02.940270   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:02.940359   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:02.944376   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:02.944500   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:02.975815   92925 cri.go:89] found id: ""
	I1213 19:14:02.975838   92925 logs.go:282] 0 containers: []
	W1213 19:14:02.975846   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:02.975855   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:02.975867   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:03.074688   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:03.074723   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:03.156277   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:03.147816   10113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:03.148501   10113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:03.150174   10113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:03.150777   10113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:03.152270   10113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:03.147816   10113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:03.148501   10113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:03.150174   10113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:03.150777   10113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:03.152270   10113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:03.156299   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:03.156311   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:03.182450   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:03.182477   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:03.221147   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:03.221181   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:03.292920   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:03.292962   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:03.323958   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:03.323983   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:03.397255   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:03.397289   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:03.410296   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:03.410325   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:03.465930   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:03.465966   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:03.497989   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:03.498017   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:06.058798   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:06.069576   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:06.069643   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:06.097652   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:06.097675   92925 cri.go:89] found id: ""
	I1213 19:14:06.097684   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:06.097767   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:06.103860   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:06.103983   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:06.133321   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:06.133354   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:06.133359   92925 cri.go:89] found id: ""
	I1213 19:14:06.133367   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:06.133434   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:06.137349   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:06.140932   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:06.141036   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:06.174768   92925 cri.go:89] found id: ""
	I1213 19:14:06.174796   92925 logs.go:282] 0 containers: []
	W1213 19:14:06.174806   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:06.174813   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:06.174923   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:06.202214   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:06.202245   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:06.202249   92925 cri.go:89] found id: ""
	I1213 19:14:06.202257   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:06.202315   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:06.206201   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:06.209869   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:06.209950   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:06.240738   92925 cri.go:89] found id: ""
	I1213 19:14:06.240762   92925 logs.go:282] 0 containers: []
	W1213 19:14:06.240771   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:06.240777   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:06.240838   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:06.267045   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:06.267067   92925 cri.go:89] found id: ""
	I1213 19:14:06.267076   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:06.267134   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:06.270950   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:06.271059   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:06.298538   92925 cri.go:89] found id: ""
	I1213 19:14:06.298566   92925 logs.go:282] 0 containers: []
	W1213 19:14:06.298576   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:06.298585   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:06.298600   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:06.401303   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:06.401348   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:06.414599   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:06.414631   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:06.441984   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:06.442056   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:06.481290   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:06.481321   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:06.541131   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:06.541162   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:06.614944   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:06.614978   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:06.700895   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:06.700937   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:06.734007   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:06.734036   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:06.804578   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:06.795862   10300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:06.796443   10300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:06.798255   10300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:06.798765   10300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:06.800521   10300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:06.795862   10300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:06.796443   10300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:06.798255   10300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:06.798765   10300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:06.800521   10300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:06.804604   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:06.804616   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:06.832247   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:06.832275   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
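	The block above is one full pass of minikube's control-plane health probe: it pgreps for a kube-apiserver process, enumerates CRI containers with crictl, then tails each component's logs (kubelet and CRI-O via journalctl, the containers via crictl, plus dmesg). The same diagnostics can be reproduced by hand; this is a minimal sketch assuming shell access to the node under test (for example via `minikube ssh` against the profile this test created) and it reuses only the commands and container IDs already reported in this log.
	
	  # inside the minikube node (illustrative; the ID is the kube-apiserver container found above)
	  sudo crictl ps -a --quiet --name=kube-apiserver
	  sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e
	  sudo journalctl -u kubelet -n 400
	  sudo journalctl -u crio -n 400
	  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400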
	I1213 19:14:09.358770   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:09.369376   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:09.369446   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:09.397174   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:09.397250   92925 cri.go:89] found id: ""
	I1213 19:14:09.397268   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:09.397341   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:09.401282   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:09.401379   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:09.430806   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:09.430829   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:09.430834   92925 cri.go:89] found id: ""
	I1213 19:14:09.430842   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:09.430895   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:09.434593   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:09.437861   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:09.437931   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:09.462972   92925 cri.go:89] found id: ""
	I1213 19:14:09.463040   92925 logs.go:282] 0 containers: []
	W1213 19:14:09.463067   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:09.463087   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:09.463154   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:09.489906   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:09.489930   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:09.489935   92925 cri.go:89] found id: ""
	I1213 19:14:09.489943   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:09.490000   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:09.493996   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:09.497780   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:09.497895   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:09.529207   92925 cri.go:89] found id: ""
	I1213 19:14:09.529232   92925 logs.go:282] 0 containers: []
	W1213 19:14:09.529241   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:09.529280   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:09.529364   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:09.556267   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:09.556289   92925 cri.go:89] found id: ""
	I1213 19:14:09.556297   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:09.556383   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:09.560687   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:09.560770   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:09.592345   92925 cri.go:89] found id: ""
	I1213 19:14:09.592380   92925 logs.go:282] 0 containers: []
	W1213 19:14:09.592389   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:09.592398   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:09.592410   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:09.604889   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:09.604917   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:09.631468   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:09.631498   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:09.670679   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:09.670712   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:09.715815   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:09.715851   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:09.743494   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:09.743523   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:09.775725   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:09.775753   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:09.873965   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:09.874039   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:09.959605   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:09.948036   10435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:09.948708   10435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:09.950229   10435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:09.950803   10435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:09.952453   10435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:09.948036   10435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:09.948708   10435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:09.950229   10435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:09.950803   10435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:09.952453   10435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:09.959680   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:09.959707   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:10.051190   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:10.051228   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:10.086712   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:10.086738   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:12.672644   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:12.683960   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:12.684058   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:12.712689   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:12.712710   92925 cri.go:89] found id: ""
	I1213 19:14:12.712718   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:12.712772   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:12.716732   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:12.716806   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:12.744449   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:12.744468   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:12.744473   92925 cri.go:89] found id: ""
	I1213 19:14:12.744480   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:12.744548   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:12.748558   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:12.752120   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:12.752195   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:12.779575   92925 cri.go:89] found id: ""
	I1213 19:14:12.779602   92925 logs.go:282] 0 containers: []
	W1213 19:14:12.779611   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:12.779617   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:12.779677   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:12.808259   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:12.808279   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:12.808284   92925 cri.go:89] found id: ""
	I1213 19:14:12.808292   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:12.808348   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:12.812274   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:12.816250   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:12.816380   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:12.842528   92925 cri.go:89] found id: ""
	I1213 19:14:12.842556   92925 logs.go:282] 0 containers: []
	W1213 19:14:12.842566   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:12.842572   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:12.842655   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:12.870846   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:12.870916   92925 cri.go:89] found id: ""
	I1213 19:14:12.870939   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:12.871003   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:12.874709   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:12.874809   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:12.901168   92925 cri.go:89] found id: ""
	I1213 19:14:12.901194   92925 logs.go:282] 0 containers: []
	W1213 19:14:12.901203   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:12.901212   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:12.901224   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:12.993856   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:12.993888   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:13.006289   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:13.006320   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:13.038515   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:13.038544   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:13.101746   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:13.101795   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:13.153697   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:13.153736   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:13.183337   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:13.183366   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:13.262960   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:13.262995   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:13.297818   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:13.297845   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:13.368622   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:13.360485   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:13.361349   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:13.363057   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:13.363352   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:13.364843   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:13.360485   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:13.361349   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:13.363057   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:13.363352   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:13.364843   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:13.368650   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:13.368664   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:13.439804   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:13.439843   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:15.976229   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:15.989077   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:15.989247   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:16.020054   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:16.020079   92925 cri.go:89] found id: ""
	I1213 19:14:16.020087   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:16.020158   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:16.024026   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:16.024118   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:16.051647   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:16.051670   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:16.051681   92925 cri.go:89] found id: ""
	I1213 19:14:16.051688   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:16.051772   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:16.055489   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:16.059115   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:16.059234   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:16.086414   92925 cri.go:89] found id: ""
	I1213 19:14:16.086438   92925 logs.go:282] 0 containers: []
	W1213 19:14:16.086447   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:16.086453   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:16.086513   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:16.118349   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:16.118415   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:16.118434   92925 cri.go:89] found id: ""
	I1213 19:14:16.118458   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:16.118545   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:16.122398   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:16.129488   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:16.129561   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:16.156699   92925 cri.go:89] found id: ""
	I1213 19:14:16.156725   92925 logs.go:282] 0 containers: []
	W1213 19:14:16.156734   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:16.156740   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:16.156799   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:16.183419   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:16.183444   92925 cri.go:89] found id: ""
	I1213 19:14:16.183465   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:16.183520   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:16.187500   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:16.187599   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:16.213532   92925 cri.go:89] found id: ""
	I1213 19:14:16.213610   92925 logs.go:282] 0 containers: []
	W1213 19:14:16.213634   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:16.213657   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:16.213703   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:16.225956   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:16.225985   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:16.299377   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:16.290117   10665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:16.291089   10665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:16.292835   10665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:16.293694   10665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:16.295412   10665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:16.290117   10665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:16.291089   10665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:16.292835   10665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:16.293694   10665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:16.295412   10665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:16.299401   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:16.299416   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:16.327259   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:16.327288   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:16.353346   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:16.353376   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:16.380053   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:16.380079   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:16.415886   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:16.415918   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:16.512571   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:16.512605   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:16.557415   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:16.557451   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:16.616391   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:16.616424   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:16.692096   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:16.692131   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:19.277525   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:19.287988   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:19.288109   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:19.314035   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:19.314055   92925 cri.go:89] found id: ""
	I1213 19:14:19.314064   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:19.314137   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:19.317785   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:19.317856   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:19.344128   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:19.344151   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:19.344155   92925 cri.go:89] found id: ""
	I1213 19:14:19.344163   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:19.344216   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:19.348619   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:19.351872   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:19.351961   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:19.377237   92925 cri.go:89] found id: ""
	I1213 19:14:19.377263   92925 logs.go:282] 0 containers: []
	W1213 19:14:19.377272   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:19.377278   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:19.377360   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:19.404210   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:19.404233   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:19.404238   92925 cri.go:89] found id: ""
	I1213 19:14:19.404245   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:19.404318   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:19.407909   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:19.411268   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:19.411336   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:19.437051   92925 cri.go:89] found id: ""
	I1213 19:14:19.437075   92925 logs.go:282] 0 containers: []
	W1213 19:14:19.437083   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:19.437089   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:19.437147   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:19.461816   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:19.461847   92925 cri.go:89] found id: ""
	I1213 19:14:19.461856   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:19.461911   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:19.465492   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:19.465587   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:19.491501   92925 cri.go:89] found id: ""
	I1213 19:14:19.491527   92925 logs.go:282] 0 containers: []
	W1213 19:14:19.491536   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:19.491545   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:19.491588   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:19.530624   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:19.530652   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:19.570388   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:19.570423   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:19.649601   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:19.649638   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:19.682548   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:19.682579   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:19.765347   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:19.765383   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:19.797401   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:19.797430   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:19.892983   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:19.893036   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:19.905252   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:19.905281   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:19.976038   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:19.968048   10849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:19.968518   10849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:19.969788   10849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:19.970473   10849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:19.972132   10849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:19.968048   10849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:19.968518   10849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:19.969788   10849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:19.970473   10849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:19.972132   10849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:19.976061   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:19.976074   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:20.015893   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:20.015932   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:22.580793   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:22.591726   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:22.591801   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:22.617941   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:22.617972   92925 cri.go:89] found id: ""
	I1213 19:14:22.617981   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:22.618039   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:22.621895   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:22.621967   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:22.648715   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:22.648778   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:22.648797   92925 cri.go:89] found id: ""
	I1213 19:14:22.648821   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:22.648904   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:22.653305   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:22.657032   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:22.657104   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:22.686906   92925 cri.go:89] found id: ""
	I1213 19:14:22.686932   92925 logs.go:282] 0 containers: []
	W1213 19:14:22.686946   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:22.686952   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:22.687013   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:22.714929   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:22.714951   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:22.714956   92925 cri.go:89] found id: ""
	I1213 19:14:22.714964   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:22.715025   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:22.719071   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:22.722714   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:22.722784   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:22.750440   92925 cri.go:89] found id: ""
	I1213 19:14:22.750470   92925 logs.go:282] 0 containers: []
	W1213 19:14:22.750480   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:22.750486   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:22.750549   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:22.777550   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:22.777572   92925 cri.go:89] found id: ""
	I1213 19:14:22.777580   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:22.777635   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:22.781380   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:22.781475   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:22.816511   92925 cri.go:89] found id: ""
	I1213 19:14:22.816537   92925 logs.go:282] 0 containers: []
	W1213 19:14:22.816547   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:22.816572   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:22.816617   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:22.842295   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:22.842322   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:22.882060   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:22.882095   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:22.965336   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:22.965374   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:22.995696   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:22.995731   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:23.098694   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:23.098782   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:23.117712   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:23.117743   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:23.167456   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:23.167497   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:23.195171   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:23.195199   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:23.279228   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:23.279264   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:23.318709   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:23.318738   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:23.384532   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:23.376056   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:23.376628   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:23.378283   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:23.379367   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:23.379806   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:23.376056   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:23.376628   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:23.378283   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:23.379367   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:23.379806   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
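	Every `kubectl describe nodes` attempt in this window fails identically: the client dials localhost:8443 (and [::1]:8443) and gets connection refused, so the kube-apiserver container that crictl keeps finding (667060dcec53...) exists but is not serving. A quick manual check, sketched under the same assumptions as above; the /healthz probe is illustrative and not part of minikube's own log gathering:
	
	  # confirm the apiserver container's state and read its most recent output
	  sudo crictl ps -a --name=kube-apiserver
	  sudo /usr/local/bin/crictl logs --tail 50 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e
	  # probe the endpoint the kubeconfig points at; "connection refused" here matches the errors above
	  curl -ksS https://localhost:8443/healthz || true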
	I1213 19:14:25.885566   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:25.896623   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:25.896696   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:25.924503   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:25.924535   92925 cri.go:89] found id: ""
	I1213 19:14:25.924544   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:25.924601   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:25.928341   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:25.928413   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:25.966385   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:25.966404   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:25.966409   92925 cri.go:89] found id: ""
	I1213 19:14:25.966417   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:25.966471   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:25.970190   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:25.974101   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:25.974229   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:26.004380   92925 cri.go:89] found id: ""
	I1213 19:14:26.004456   92925 logs.go:282] 0 containers: []
	W1213 19:14:26.004479   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:26.004498   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:26.004595   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:26.031828   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:26.031853   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:26.031860   92925 cri.go:89] found id: ""
	I1213 19:14:26.031868   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:26.031925   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:26.036387   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:26.040161   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:26.040235   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:26.070525   92925 cri.go:89] found id: ""
	I1213 19:14:26.070591   92925 logs.go:282] 0 containers: []
	W1213 19:14:26.070616   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:26.070635   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:26.070724   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:26.108253   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:26.108277   92925 cri.go:89] found id: ""
	I1213 19:14:26.108294   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:26.108373   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:26.112191   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:26.112324   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:26.146018   92925 cri.go:89] found id: ""
	I1213 19:14:26.146042   92925 logs.go:282] 0 containers: []
	W1213 19:14:26.146052   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:26.146060   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:26.146094   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:26.187197   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:26.187229   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:26.232694   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:26.232724   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:26.310398   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:26.310435   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:26.323748   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:26.323775   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:26.350662   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:26.350689   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:26.380636   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:26.380707   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:26.407064   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:26.407089   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:26.483950   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:26.483984   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:26.536817   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:26.536846   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:26.654750   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:26.654801   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:26.733679   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:26.725319   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:26.726046   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:26.727714   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:26.728228   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:26.729870   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:26.725319   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:26.726046   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:26.727714   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:26.728228   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:26.729870   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
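The gathering pass above is driven entirely by shell commands run over SSH on the node, so the same sequence can be reproduced by hand. A minimal sketch using the container ID and paths that appear in the log (the for-loop over component names is illustrative, not minikube's actual implementation):

    # discover containers per component, exactly as the ssh_runner lines above do
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      sudo crictl ps -a --quiet --name="$name"
    done
    # dump logs for a discovered ID, e.g. the etcd container from this run
    sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894
    # host-level sources collected in the same pass
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig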
	I1213 19:14:29.233968   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:29.244666   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:29.244746   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:29.272994   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:29.273043   92925 cri.go:89] found id: ""
	I1213 19:14:29.273051   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:29.273108   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:29.277950   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:29.278022   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:29.304315   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:29.304334   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:29.304338   92925 cri.go:89] found id: ""
	I1213 19:14:29.304346   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:29.304402   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:29.308379   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:29.311905   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:29.311974   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:29.337925   92925 cri.go:89] found id: ""
	I1213 19:14:29.337953   92925 logs.go:282] 0 containers: []
	W1213 19:14:29.337962   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:29.337968   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:29.338028   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:29.365135   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:29.365156   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:29.365160   92925 cri.go:89] found id: ""
	I1213 19:14:29.365167   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:29.365222   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:29.368867   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:29.372263   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:29.372334   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:29.403367   92925 cri.go:89] found id: ""
	I1213 19:14:29.403393   92925 logs.go:282] 0 containers: []
	W1213 19:14:29.403402   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:29.403408   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:29.403466   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:29.429639   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:29.429703   92925 cri.go:89] found id: ""
	I1213 19:14:29.429718   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:29.429782   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:29.433301   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:29.433373   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:29.460244   92925 cri.go:89] found id: ""
	I1213 19:14:29.460272   92925 logs.go:282] 0 containers: []
	W1213 19:14:29.460282   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:29.460291   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:29.460302   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:29.555127   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:29.555166   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:29.583790   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:29.583827   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:29.646377   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:29.646409   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:29.720554   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:29.720592   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:29.751659   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:29.751686   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:29.788857   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:29.788883   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:29.800809   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:29.800844   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:29.869250   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:29.862112   11259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:29.862682   11259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:29.864146   11259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:29.864555   11259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:29.865755   11259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:29.862112   11259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:29.862682   11259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:29.864146   11259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:29.864555   11259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:29.865755   11259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:29.869274   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:29.869287   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:29.913688   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:29.913724   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:29.956382   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:29.956408   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:32.553678   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:32.565396   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:32.565470   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:32.592588   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:32.592613   92925 cri.go:89] found id: ""
	I1213 19:14:32.592622   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:32.592684   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:32.596429   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:32.596509   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:32.624469   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:32.624493   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:32.624499   92925 cri.go:89] found id: ""
	I1213 19:14:32.624506   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:32.624559   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:32.628270   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:32.631873   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:32.632003   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:32.657120   92925 cri.go:89] found id: ""
	I1213 19:14:32.657144   92925 logs.go:282] 0 containers: []
	W1213 19:14:32.657153   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:32.657159   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:32.657220   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:32.684878   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:32.684901   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:32.684906   92925 cri.go:89] found id: ""
	I1213 19:14:32.684914   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:32.684976   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:32.689235   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:32.692754   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:32.692825   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:32.722855   92925 cri.go:89] found id: ""
	I1213 19:14:32.722878   92925 logs.go:282] 0 containers: []
	W1213 19:14:32.722887   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:32.722893   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:32.722952   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:32.753685   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:32.753704   92925 cri.go:89] found id: ""
	I1213 19:14:32.753712   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:32.753764   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:32.758129   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:32.758214   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:32.784526   92925 cri.go:89] found id: ""
	I1213 19:14:32.784599   92925 logs.go:282] 0 containers: []
	W1213 19:14:32.784623   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:32.784645   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:32.784683   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:32.826015   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:32.826050   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:32.915444   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:32.915483   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:32.943132   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:32.943167   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:33.017904   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:33.017945   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:33.050228   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:33.050258   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:33.122559   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:33.114436   11385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:33.115150   11385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:33.116863   11385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:33.117500   11385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:33.118980   11385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:33.114436   11385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:33.115150   11385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:33.116863   11385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:33.117500   11385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:33.118980   11385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:33.122583   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:33.122597   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:33.177421   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:33.177455   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:33.206989   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:33.207016   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:33.305130   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:33.305169   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:33.319318   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:33.319416   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:35.847899   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:35.859028   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:35.859101   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:35.887722   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:35.887745   92925 cri.go:89] found id: ""
	I1213 19:14:35.887754   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:35.887807   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:35.891699   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:35.891771   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:35.920114   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:35.920138   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:35.920144   92925 cri.go:89] found id: ""
	I1213 19:14:35.920152   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:35.920222   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:35.923937   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:35.927605   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:35.927678   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:35.953980   92925 cri.go:89] found id: ""
	I1213 19:14:35.954007   92925 logs.go:282] 0 containers: []
	W1213 19:14:35.954016   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:35.954023   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:35.954080   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:35.980645   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:35.980665   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:35.980670   92925 cri.go:89] found id: ""
	I1213 19:14:35.980678   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:35.980742   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:35.991946   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:35.996641   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:35.996726   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:36.026202   92925 cri.go:89] found id: ""
	I1213 19:14:36.026228   92925 logs.go:282] 0 containers: []
	W1213 19:14:36.026238   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:36.026245   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:36.026350   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:36.051979   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:36.052001   92925 cri.go:89] found id: ""
	I1213 19:14:36.052010   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:36.052066   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:36.055868   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:36.055938   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:36.083649   92925 cri.go:89] found id: ""
	I1213 19:14:36.083675   92925 logs.go:282] 0 containers: []
	W1213 19:14:36.083685   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:36.083693   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:36.083704   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:36.164414   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:36.164464   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:36.198766   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:36.198793   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:36.298985   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:36.299028   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:36.346466   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:36.346498   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:36.376231   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:36.376258   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:36.403571   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:36.403597   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:36.417684   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:36.417714   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:36.487562   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:36.479494   11529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:36.480246   11529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:36.481848   11529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:36.482211   11529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:36.483808   11529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:36.479494   11529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:36.480246   11529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:36.481848   11529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:36.482211   11529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:36.483808   11529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:36.487585   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:36.487597   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:36.514488   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:36.514514   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:36.559954   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:36.559990   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:39.133526   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:39.150754   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:39.150826   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:39.179295   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:39.179315   92925 cri.go:89] found id: ""
	I1213 19:14:39.179324   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:39.179380   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:39.185538   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:39.185605   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:39.216427   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:39.216449   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:39.216454   92925 cri.go:89] found id: ""
	I1213 19:14:39.216462   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:39.216517   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:39.221041   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:39.225622   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:39.225691   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:39.251922   92925 cri.go:89] found id: ""
	I1213 19:14:39.251946   92925 logs.go:282] 0 containers: []
	W1213 19:14:39.251955   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:39.251961   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:39.252019   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:39.281875   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:39.281900   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:39.281905   92925 cri.go:89] found id: ""
	I1213 19:14:39.281912   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:39.281970   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:39.286420   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:39.290568   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:39.290663   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:39.315894   92925 cri.go:89] found id: ""
	I1213 19:14:39.315996   92925 logs.go:282] 0 containers: []
	W1213 19:14:39.316021   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:39.316041   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:39.316153   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:39.344960   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:39.344983   92925 cri.go:89] found id: ""
	I1213 19:14:39.344992   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:39.345091   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:39.348776   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:39.348847   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:39.378840   92925 cri.go:89] found id: ""
	I1213 19:14:39.378862   92925 logs.go:282] 0 containers: []
	W1213 19:14:39.378870   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:39.378879   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:39.378890   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:39.410058   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:39.410087   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:39.510110   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:39.510188   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:39.542821   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:39.542892   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:39.614365   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:39.605214   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:39.606127   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:39.607756   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:39.608303   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:39.610109   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:39.605214   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:39.606127   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:39.607756   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:39.608303   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:39.610109   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:39.614387   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:39.614403   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:39.656166   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:39.656199   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:39.700850   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:39.700887   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:39.735225   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:39.735267   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:39.765360   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:39.765396   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:39.856068   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:39.856115   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:39.883708   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:39.883738   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:42.458661   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:42.469945   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:42.470018   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:42.497805   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:42.497831   92925 cri.go:89] found id: ""
	I1213 19:14:42.497840   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:42.497898   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:42.502059   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:42.502128   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:42.534485   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:42.534509   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:42.534514   92925 cri.go:89] found id: ""
	I1213 19:14:42.534521   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:42.534578   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:42.539929   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:42.544534   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:42.544618   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:42.572959   92925 cri.go:89] found id: ""
	I1213 19:14:42.572983   92925 logs.go:282] 0 containers: []
	W1213 19:14:42.572991   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:42.572998   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:42.573085   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:42.605231   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:42.605253   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:42.605257   92925 cri.go:89] found id: ""
	I1213 19:14:42.605265   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:42.605324   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:42.609379   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:42.613098   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:42.613183   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:42.641856   92925 cri.go:89] found id: ""
	I1213 19:14:42.641881   92925 logs.go:282] 0 containers: []
	W1213 19:14:42.641890   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:42.641897   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:42.641956   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:42.670835   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:42.670862   92925 cri.go:89] found id: ""
	I1213 19:14:42.670870   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:42.670923   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:42.674669   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:42.674780   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:42.701820   92925 cri.go:89] found id: ""
	I1213 19:14:42.701886   92925 logs.go:282] 0 containers: []
	W1213 19:14:42.701912   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:42.701935   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:42.701974   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:42.795111   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:42.795148   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:42.843272   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:42.843308   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:42.918660   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:42.918701   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:42.953437   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:42.953470   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:42.980705   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:42.980735   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:43.075228   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:43.075266   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:43.089833   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:43.089865   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:43.165554   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:43.156189   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:43.157143   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:43.158950   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:43.160521   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:43.161743   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:43.156189   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:43.157143   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:43.158950   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:43.160521   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:43.161743   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:43.165619   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:43.165648   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:43.195772   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:43.195850   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:43.266745   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:43.266781   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
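Every cycle above fails at the same point: kubectl cannot reach the apiserver on localhost:8443, so "describe nodes" exits with connection refused even though a kube-apiserver container ID is found. A quick way to tell a running container apart from a serving apiserver on the node (the ss/curl probes below are illustrative assumptions, not checks minikube performs; a live apiserver returns an HTTP response rather than connection refused):

    # same process check the log runs before each gathering pass
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # is anything listening on 8443, and does it answer HTTPS?
    sudo ss -ltnp | grep 8443
    curl -k https://localhost:8443/healthz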
	I1213 19:14:45.800090   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:45.811228   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:45.811319   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:45.844476   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:45.844562   92925 cri.go:89] found id: ""
	I1213 19:14:45.844585   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:45.844658   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:45.848635   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:45.848730   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:45.878507   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:45.878532   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:45.878537   92925 cri.go:89] found id: ""
	I1213 19:14:45.878545   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:45.878626   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:45.883362   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:45.887015   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:45.887090   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:45.922472   92925 cri.go:89] found id: ""
	I1213 19:14:45.922495   92925 logs.go:282] 0 containers: []
	W1213 19:14:45.922504   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:45.922510   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:45.922571   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:45.961736   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:45.961766   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:45.961772   92925 cri.go:89] found id: ""
	I1213 19:14:45.961779   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:45.961846   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:45.965883   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:45.969985   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:45.970062   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:46.005121   92925 cri.go:89] found id: ""
	I1213 19:14:46.005143   92925 logs.go:282] 0 containers: []
	W1213 19:14:46.005153   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:46.005159   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:46.005218   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:46.033851   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:46.033871   92925 cri.go:89] found id: ""
	I1213 19:14:46.033878   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:46.033932   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:46.037737   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:46.037813   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:46.064426   92925 cri.go:89] found id: ""
	I1213 19:14:46.064493   92925 logs.go:282] 0 containers: []
	W1213 19:14:46.064517   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:46.064541   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:46.064580   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:46.162246   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:46.162285   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:46.175470   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:46.175500   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:46.249273   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:46.239319   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:46.240280   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:46.242150   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:46.242816   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:46.244382   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:46.239319   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:46.240280   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:46.242150   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:46.242816   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:46.244382   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:46.249333   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:46.249347   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:46.277985   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:46.278016   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:46.332032   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:46.332065   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:46.376410   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:46.376446   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:46.455695   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:46.455772   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:46.485453   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:46.485479   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:46.522886   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:46.522916   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:46.601217   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:46.601253   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:49.142956   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:49.157230   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:49.157309   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:49.185733   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:49.185767   92925 cri.go:89] found id: ""
	I1213 19:14:49.185775   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:49.185830   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:49.190180   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:49.190249   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:49.218248   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:49.218271   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:49.218276   92925 cri.go:89] found id: ""
	I1213 19:14:49.218285   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:49.218343   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:49.222331   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:49.226027   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:49.226107   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:49.258473   92925 cri.go:89] found id: ""
	I1213 19:14:49.258496   92925 logs.go:282] 0 containers: []
	W1213 19:14:49.258504   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:49.258512   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:49.258570   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:49.285496   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:49.285560   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:49.285578   92925 cri.go:89] found id: ""
	I1213 19:14:49.285601   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:49.285684   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:49.291508   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:49.296197   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:49.296358   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:49.325094   92925 cri.go:89] found id: ""
	I1213 19:14:49.325119   92925 logs.go:282] 0 containers: []
	W1213 19:14:49.325127   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:49.325134   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:49.325193   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:49.350750   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:49.350777   92925 cri.go:89] found id: ""
	I1213 19:14:49.350794   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:49.350857   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:49.354789   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:49.354915   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:49.381275   92925 cri.go:89] found id: ""
	I1213 19:14:49.381302   92925 logs.go:282] 0 containers: []
	W1213 19:14:49.381311   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:49.381320   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:49.381331   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:49.473722   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:49.473760   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:49.486016   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:49.486083   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:49.523030   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:49.523060   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:49.602664   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:49.602699   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:49.685307   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:49.685343   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:49.720678   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:49.720706   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:49.787762   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:49.779084   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:49.779733   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:49.781504   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:49.782055   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:49.783675   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:49.779084   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:49.779733   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:49.781504   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:49.782055   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:49.783675   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:49.787782   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:49.787795   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:49.826153   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:49.826188   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:49.871719   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:49.871752   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:49.902768   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:49.902858   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
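	Every cycle above ends the same way: "kubectl describe nodes" fails with "connection refused" on localhost:8443 even though crictl still reports a kube-apiserver container id, so nothing is currently accepting connections on the apiserver's secure port. A hedged sketch of an equivalent readiness probe against that port; the /healthz path and the 3-second interval are assumptions, while the port and the "connection refused" symptom come from the errors above:

	    // Illustrative sketch, not minikube's code: poll https://localhost:8443/healthz
	    // until anything answers or a deadline passes. TLS verification is skipped
	    // because the probe only cares whether something is listening on the port.
	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "net/http"
	        "time"
	    )

	    func main() {
	        client := &http.Client{
	            Timeout:   2 * time.Second,
	            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	        }
	        deadline := time.Now().Add(2 * time.Minute)
	        for time.Now().Before(deadline) {
	            resp, err := client.Get("https://localhost:8443/healthz")
	            if err == nil {
	                resp.Body.Close()
	                fmt.Println("apiserver answered:", resp.Status)
	                return
	            }
	            fmt.Println("still waiting:", err) // e.g. "connect: connection refused"
	            time.Sleep(3 * time.Second)
	        }
	        fmt.Println("gave up: nothing became reachable on localhost:8443")
	    }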
	I1213 19:14:52.432900   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:52.443527   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:52.443639   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:52.470204   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:52.470237   92925 cri.go:89] found id: ""
	I1213 19:14:52.470247   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:52.470302   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:52.473971   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:52.474058   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:52.501963   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:52.501983   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:52.501987   92925 cri.go:89] found id: ""
	I1213 19:14:52.501994   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:52.502048   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:52.505744   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:52.509295   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:52.509368   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:52.534850   92925 cri.go:89] found id: ""
	I1213 19:14:52.534917   92925 logs.go:282] 0 containers: []
	W1213 19:14:52.534943   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:52.534959   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:52.535033   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:52.570973   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:52.571045   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:52.571066   92925 cri.go:89] found id: ""
	I1213 19:14:52.571086   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:52.571156   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:52.574824   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:52.578317   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:52.578384   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:52.606849   92925 cri.go:89] found id: ""
	I1213 19:14:52.606873   92925 logs.go:282] 0 containers: []
	W1213 19:14:52.606882   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:52.606888   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:52.606945   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:52.633073   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:52.633095   92925 cri.go:89] found id: ""
	I1213 19:14:52.633103   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:52.633169   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:52.636819   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:52.636895   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:52.663310   92925 cri.go:89] found id: ""
	I1213 19:14:52.663333   92925 logs.go:282] 0 containers: []
	W1213 19:14:52.663342   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:52.663350   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:52.663363   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:52.732904   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:52.724948   12171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:52.725610   12171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:52.727167   12171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:52.727671   12171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:52.729366   12171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:52.724948   12171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:52.725610   12171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:52.727167   12171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:52.727671   12171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:52.729366   12171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:52.732929   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:52.732943   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:52.771098   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:52.771129   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:52.846025   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:52.846063   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:52.888075   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:52.888104   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:52.992414   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:52.992452   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:53.007058   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:53.007089   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:53.034812   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:53.034841   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:53.078790   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:53.078828   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:53.134673   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:53.134708   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:53.162943   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:53.162969   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:55.740743   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:55.751731   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:55.751816   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:55.779888   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:55.779908   92925 cri.go:89] found id: ""
	I1213 19:14:55.779916   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:55.779976   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:55.783761   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:55.783831   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:55.810156   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:55.810175   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:55.810185   92925 cri.go:89] found id: ""
	I1213 19:14:55.810192   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:55.810252   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:55.814013   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:55.817577   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:55.817649   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:55.843468   92925 cri.go:89] found id: ""
	I1213 19:14:55.843491   92925 logs.go:282] 0 containers: []
	W1213 19:14:55.843499   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:55.843505   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:55.843561   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:55.870048   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:55.870081   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:55.870093   92925 cri.go:89] found id: ""
	I1213 19:14:55.870100   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:55.870158   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:55.874026   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:55.877764   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:55.877852   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:55.907873   92925 cri.go:89] found id: ""
	I1213 19:14:55.907900   92925 logs.go:282] 0 containers: []
	W1213 19:14:55.907909   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:55.907915   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:55.907976   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:55.934710   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:55.934732   92925 cri.go:89] found id: ""
	I1213 19:14:55.934740   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:55.934795   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:55.938598   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:55.938671   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:55.968271   92925 cri.go:89] found id: ""
	I1213 19:14:55.968337   92925 logs.go:282] 0 containers: []
	W1213 19:14:55.968361   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:55.968387   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:55.968416   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:56.002213   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:56.002285   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:56.029658   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:56.029741   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:56.125956   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:56.126039   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:56.139465   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:56.139492   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:56.191699   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:56.191735   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:56.278131   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:56.278179   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:56.314251   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:56.314283   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:56.383224   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:56.373948   12354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:56.374799   12354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:56.376672   12354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:56.377083   12354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:56.378823   12354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:56.373948   12354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:56.374799   12354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:56.376672   12354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:56.377083   12354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:56.378823   12354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:56.383248   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:56.383261   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:56.410961   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:56.410990   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:56.450595   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:56.450633   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:59.032642   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:59.043619   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:59.043712   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:59.070836   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:59.070859   92925 cri.go:89] found id: ""
	I1213 19:14:59.070867   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:59.070934   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:59.074933   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:59.075009   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:59.112290   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:59.112313   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:59.112318   92925 cri.go:89] found id: ""
	I1213 19:14:59.112325   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:59.112380   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:59.117374   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:59.121073   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:59.121166   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:59.159645   92925 cri.go:89] found id: ""
	I1213 19:14:59.159714   92925 logs.go:282] 0 containers: []
	W1213 19:14:59.159741   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:59.159763   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:59.159838   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:59.193406   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:59.193430   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:59.193435   92925 cri.go:89] found id: ""
	I1213 19:14:59.193443   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:59.193524   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:59.197329   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:59.201001   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:59.201109   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:59.227682   92925 cri.go:89] found id: ""
	I1213 19:14:59.227706   92925 logs.go:282] 0 containers: []
	W1213 19:14:59.227715   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:59.227721   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:59.227784   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:59.254466   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:59.254497   92925 cri.go:89] found id: ""
	I1213 19:14:59.254505   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:59.254561   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:59.258458   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:59.258530   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:59.285792   92925 cri.go:89] found id: ""
	I1213 19:14:59.285817   92925 logs.go:282] 0 containers: []
	W1213 19:14:59.285826   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:59.285835   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:59.285851   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:59.312955   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:59.312990   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:59.394158   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:59.394195   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:59.439055   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:59.439084   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:59.452200   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:59.452253   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:59.543624   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:59.535183   12473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:59.536016   12473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:59.537681   12473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:59.538269   12473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:59.539987   12473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:59.535183   12473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:59.536016   12473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:59.537681   12473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:59.538269   12473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:59.539987   12473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:59.543645   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:59.543659   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:59.571506   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:59.571533   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:59.615595   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:59.615634   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:59.717216   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:59.717256   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:59.764205   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:59.764243   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:59.840500   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:59.840538   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:02.367252   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:15:02.379179   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:15:02.379252   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:15:02.407368   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:02.407394   92925 cri.go:89] found id: ""
	I1213 19:15:02.407402   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:15:02.407464   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:02.411245   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:15:02.411321   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:15:02.439707   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:02.439727   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:02.439732   92925 cri.go:89] found id: ""
	I1213 19:15:02.439739   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:15:02.439793   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:02.443520   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:02.447838   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:15:02.447965   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:15:02.475049   92925 cri.go:89] found id: ""
	I1213 19:15:02.475077   92925 logs.go:282] 0 containers: []
	W1213 19:15:02.475086   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:15:02.475093   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:15:02.475153   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:15:02.509558   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:02.509582   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:02.509587   92925 cri.go:89] found id: ""
	I1213 19:15:02.509595   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:15:02.509652   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:02.513964   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:02.519816   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:15:02.519888   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:15:02.549572   92925 cri.go:89] found id: ""
	I1213 19:15:02.549639   92925 logs.go:282] 0 containers: []
	W1213 19:15:02.549653   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:15:02.549660   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:15:02.549720   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:15:02.578189   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:02.578215   92925 cri.go:89] found id: ""
	I1213 19:15:02.578224   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:15:02.578287   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:02.582094   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:15:02.582166   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:15:02.609748   92925 cri.go:89] found id: ""
	I1213 19:15:02.609774   92925 logs.go:282] 0 containers: []
	W1213 19:15:02.609783   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:15:02.609792   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:15:02.609823   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:02.660274   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:15:02.660313   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:02.737557   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:15:02.737590   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:15:02.821155   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:15:02.821193   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:15:02.853468   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:15:02.853501   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:15:02.866631   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:15:02.866661   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:02.895294   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:15:02.895323   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:02.940697   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:15:02.940734   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:02.970055   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:15:02.970088   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:03.002379   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:15:03.002409   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:15:03.096355   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:15:03.096390   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:15:03.189863   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:15:03.181408   12656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:03.182165   12656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:03.183899   12656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:03.184754   12656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:03.186389   12656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:15:03.181408   12656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:03.182165   12656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:03.183899   12656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:03.184754   12656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:03.186389   12656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:15:05.690514   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:15:05.702677   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:15:05.702772   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:15:05.730136   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:05.730160   92925 cri.go:89] found id: ""
	I1213 19:15:05.730169   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:15:05.730226   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:05.733966   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:15:05.734047   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:15:05.761337   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:05.761404   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:05.761425   92925 cri.go:89] found id: ""
	I1213 19:15:05.761450   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:15:05.761534   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:05.766511   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:05.770470   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:15:05.770545   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:15:05.803220   92925 cri.go:89] found id: ""
	I1213 19:15:05.803284   92925 logs.go:282] 0 containers: []
	W1213 19:15:05.803300   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:15:05.803306   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:15:05.803383   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:15:05.831772   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:05.831797   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:05.831803   92925 cri.go:89] found id: ""
	I1213 19:15:05.831810   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:15:05.831869   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:05.835814   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:05.839281   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:15:05.839351   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:15:05.870011   92925 cri.go:89] found id: ""
	I1213 19:15:05.870038   92925 logs.go:282] 0 containers: []
	W1213 19:15:05.870059   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:15:05.870065   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:15:05.870126   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:15:05.898850   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:05.898877   92925 cri.go:89] found id: ""
	I1213 19:15:05.898888   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:15:05.898943   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:05.903063   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:15:05.903177   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:15:05.930061   92925 cri.go:89] found id: ""
	I1213 19:15:05.930126   92925 logs.go:282] 0 containers: []
	W1213 19:15:05.930140   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:15:05.930150   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:15:05.930164   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:15:05.943518   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:15:05.943549   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:05.973699   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:15:05.973729   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:15:06.024591   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:15:06.024622   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:15:06.131997   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:15:06.132041   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:15:06.202110   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:15:06.193932   12751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:06.195174   12751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:06.196901   12751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:06.197593   12751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:06.198598   12751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:15:06.193932   12751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:06.195174   12751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:06.196901   12751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:06.197593   12751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:06.198598   12751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:15:06.202133   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:15:06.202145   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:06.241491   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:15:06.241525   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:06.289002   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:15:06.289076   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:06.376385   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:15:06.376422   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:06.406893   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:15:06.406920   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:06.438586   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:15:06.438615   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:15:09.021141   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:15:09.032497   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:15:09.032597   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:15:09.061840   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:09.061871   92925 cri.go:89] found id: ""
	I1213 19:15:09.061881   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:15:09.061939   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:09.065632   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:15:09.065706   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:15:09.094419   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:09.094444   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:09.094449   92925 cri.go:89] found id: ""
	I1213 19:15:09.094456   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:15:09.094517   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:09.098305   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:09.108354   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:15:09.108432   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:15:09.137672   92925 cri.go:89] found id: ""
	I1213 19:15:09.137706   92925 logs.go:282] 0 containers: []
	W1213 19:15:09.137716   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:15:09.137722   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:15:09.137785   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:15:09.170831   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:09.170854   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:09.170859   92925 cri.go:89] found id: ""
	I1213 19:15:09.170866   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:15:09.170929   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:09.174672   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:09.177949   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:15:09.178023   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:15:09.208255   92925 cri.go:89] found id: ""
	I1213 19:15:09.208282   92925 logs.go:282] 0 containers: []
	W1213 19:15:09.208291   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:15:09.208297   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:15:09.208352   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:15:09.234350   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:09.234373   92925 cri.go:89] found id: ""
	I1213 19:15:09.234381   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:15:09.234453   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:09.238030   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:15:09.238102   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:15:09.264310   92925 cri.go:89] found id: ""
	I1213 19:15:09.264335   92925 logs.go:282] 0 containers: []
	W1213 19:15:09.264344   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:15:09.264352   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:15:09.264365   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:09.295245   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:15:09.295276   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:15:09.369835   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:15:09.369869   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:15:09.472350   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:15:09.472384   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:09.500555   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:15:09.500589   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:15:09.535996   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:15:09.536032   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:15:09.552067   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:15:09.552096   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:15:09.624766   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:15:09.616285   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:09.617238   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:09.618950   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:09.619348   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:09.620912   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:15:09.616285   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:09.617238   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:09.618950   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:09.619348   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:09.620912   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:15:09.624810   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:15:09.624823   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:09.654769   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:15:09.654796   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:09.695636   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:15:09.695711   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:09.740840   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:15:09.740873   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:12.330150   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:15:12.341327   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:15:12.341430   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:15:12.373666   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:12.373692   92925 cri.go:89] found id: ""
	I1213 19:15:12.373699   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:15:12.373760   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:12.377493   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:15:12.377563   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:15:12.407860   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:12.407882   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:12.407886   92925 cri.go:89] found id: ""
	I1213 19:15:12.407897   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:15:12.407965   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:12.411939   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:12.416613   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:15:12.416687   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:15:12.447044   92925 cri.go:89] found id: ""
	I1213 19:15:12.447071   92925 logs.go:282] 0 containers: []
	W1213 19:15:12.447080   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:15:12.447086   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:15:12.447149   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:15:12.474565   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:12.474599   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:12.474604   92925 cri.go:89] found id: ""
	I1213 19:15:12.474612   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:15:12.474669   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:12.478501   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:12.482327   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:15:12.482425   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:15:12.519207   92925 cri.go:89] found id: ""
	I1213 19:15:12.519235   92925 logs.go:282] 0 containers: []
	W1213 19:15:12.519245   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:15:12.519252   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:15:12.519330   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:15:12.548236   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:12.548259   92925 cri.go:89] found id: ""
	I1213 19:15:12.548269   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:15:12.548334   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:12.552167   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:15:12.552292   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:15:12.581061   92925 cri.go:89] found id: ""
	I1213 19:15:12.581086   92925 logs.go:282] 0 containers: []
	W1213 19:15:12.581094   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:15:12.581103   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:15:12.581115   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:12.626762   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:15:12.626795   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:12.676771   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:15:12.676803   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:12.708623   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:15:12.708661   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:12.735332   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:15:12.735361   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:15:12.830566   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:15:12.830606   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:12.858035   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:15:12.858107   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:12.953406   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:15:12.953445   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:15:13.037585   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:15:13.037626   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:15:13.070076   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:15:13.070108   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:15:13.083239   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:15:13.083266   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:15:13.171369   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:15:13.163050   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:13.163831   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:13.165471   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:13.166105   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:13.167624   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:15:13.163050   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:13.163831   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:13.165471   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:13.166105   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:13.167624   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:15:15.672265   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:15:15.683518   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:15:15.683589   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:15:15.713736   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:15.713764   92925 cri.go:89] found id: ""
	I1213 19:15:15.713773   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:15:15.713845   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:15.718041   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:15:15.718116   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:15:15.745439   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:15.745462   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:15.745467   92925 cri.go:89] found id: ""
	I1213 19:15:15.745475   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:15:15.745555   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:15.749679   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:15.753271   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:15:15.753343   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:15:15.780766   92925 cri.go:89] found id: ""
	I1213 19:15:15.780791   92925 logs.go:282] 0 containers: []
	W1213 19:15:15.780800   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:15:15.780806   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:15:15.780867   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:15:15.809433   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:15.809453   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:15.809458   92925 cri.go:89] found id: ""
	I1213 19:15:15.809466   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:15:15.809521   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:15.813350   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:15.816829   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:15:15.816899   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:15:15.843466   92925 cri.go:89] found id: ""
	I1213 19:15:15.843491   92925 logs.go:282] 0 containers: []
	W1213 19:15:15.843501   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:15:15.843507   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:15:15.843566   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:15:15.869979   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:15.870003   92925 cri.go:89] found id: ""
	I1213 19:15:15.870012   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:15:15.870069   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:15.873941   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:15:15.874036   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:15:15.906204   92925 cri.go:89] found id: ""
	I1213 19:15:15.906268   92925 logs.go:282] 0 containers: []
	W1213 19:15:15.906283   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:15:15.906293   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:15:15.906305   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:15:16.002221   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:15:16.002261   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:16.030993   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:15:16.031024   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:16.078933   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:15:16.078967   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:15:16.173955   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:15:16.174010   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:15:16.207960   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:15:16.207989   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:15:16.221095   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:15:16.221124   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:15:16.290865   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:15:16.280288   13166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:16.281366   13166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:16.282142   13166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:16.283740   13166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:16.284314   13166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:15:16.280288   13166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:16.281366   13166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:16.282142   13166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:16.283740   13166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:16.284314   13166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:15:16.290940   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:15:16.290969   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:16.330431   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:15:16.330462   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:16.403747   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:15:16.403785   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:16.435000   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:15:16.435076   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:18.967118   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:15:18.978473   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:15:18.978548   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:15:19.009416   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:19.009442   92925 cri.go:89] found id: ""
	I1213 19:15:19.009450   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:15:19.009506   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:19.013229   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:15:19.013304   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:15:19.046195   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:19.046217   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:19.046221   92925 cri.go:89] found id: ""
	I1213 19:15:19.046228   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:15:19.046284   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:19.050380   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:19.055287   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:15:19.055364   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:15:19.084697   92925 cri.go:89] found id: ""
	I1213 19:15:19.084724   92925 logs.go:282] 0 containers: []
	W1213 19:15:19.084734   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:15:19.084740   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:15:19.084799   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:15:19.134188   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:19.134212   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:19.134217   92925 cri.go:89] found id: ""
	I1213 19:15:19.134225   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:15:19.134281   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:19.139452   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:19.143380   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:15:19.143515   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:15:19.176707   92925 cri.go:89] found id: ""
	I1213 19:15:19.176733   92925 logs.go:282] 0 containers: []
	W1213 19:15:19.176742   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:15:19.176748   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:15:19.176808   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:15:19.205658   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:19.205681   92925 cri.go:89] found id: ""
	I1213 19:15:19.205689   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:15:19.205769   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:19.209480   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:15:19.209556   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:15:19.236187   92925 cri.go:89] found id: ""
	I1213 19:15:19.236210   92925 logs.go:282] 0 containers: []
	W1213 19:15:19.236219   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:15:19.236227   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:15:19.236239   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:15:19.335347   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:15:19.335384   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:15:19.347594   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:15:19.347622   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:15:19.423749   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:15:19.415662   13274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:19.416536   13274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:19.418222   13274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:19.418572   13274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:19.420106   13274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:15:19.415662   13274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:19.416536   13274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:19.418222   13274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:19.418572   13274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:19.420106   13274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:15:19.423773   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:15:19.423785   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:19.458293   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:15:19.458322   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:19.491891   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:15:19.491981   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:15:19.532203   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:15:19.532289   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:19.572383   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:15:19.572416   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:19.623843   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:15:19.623878   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:19.701590   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:15:19.701669   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:19.730646   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:15:19.730674   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:15:22.313136   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:15:22.324070   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:15:22.324192   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:15:22.354911   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:22.354936   92925 cri.go:89] found id: ""
	I1213 19:15:22.354944   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:15:22.355017   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:22.359138   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:15:22.359232   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:15:22.387533   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:22.387553   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:22.387559   92925 cri.go:89] found id: ""
	I1213 19:15:22.387567   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:15:22.387622   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:22.391451   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:22.395283   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:15:22.395396   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:15:22.424307   92925 cri.go:89] found id: ""
	I1213 19:15:22.424330   92925 logs.go:282] 0 containers: []
	W1213 19:15:22.424338   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:15:22.424345   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:15:22.424406   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:15:22.453085   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:22.453146   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:22.453167   92925 cri.go:89] found id: ""
	I1213 19:15:22.453192   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:15:22.453265   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:22.457420   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:22.461164   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:15:22.461238   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:15:22.491907   92925 cri.go:89] found id: ""
	I1213 19:15:22.491930   92925 logs.go:282] 0 containers: []
	W1213 19:15:22.491939   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:15:22.491944   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:15:22.492029   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:15:22.527521   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:22.527588   92925 cri.go:89] found id: ""
	I1213 19:15:22.527615   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:15:22.527710   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:22.531946   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:15:22.532027   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:15:22.559453   92925 cri.go:89] found id: ""
	I1213 19:15:22.559480   92925 logs.go:282] 0 containers: []
	W1213 19:15:22.559499   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:15:22.559510   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:15:22.559522   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:22.601772   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:15:22.601808   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:22.649158   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:15:22.649193   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:22.676639   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:15:22.676667   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:15:22.777850   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:15:22.777888   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:15:22.851444   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:15:22.842501   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:22.843358   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:22.845491   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:22.846536   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:22.847439   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:15:22.842501   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:22.843358   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:22.845491   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:22.846536   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:22.847439   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:15:22.851468   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:15:22.851480   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:22.933320   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:15:22.933358   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:22.962559   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:15:22.962589   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:15:23.059725   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:15:23.059803   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:15:23.109255   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:15:23.109286   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:15:23.122814   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:15:23.122844   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:25.651780   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:15:25.662957   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:15:25.663032   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:15:25.696971   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:25.696993   92925 cri.go:89] found id: ""
	I1213 19:15:25.697001   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:15:25.697087   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:25.701838   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:15:25.701919   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:15:25.738295   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:25.738373   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:25.738386   92925 cri.go:89] found id: ""
	I1213 19:15:25.738395   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:15:25.738459   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:25.742364   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:25.746297   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:15:25.746400   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:15:25.772105   92925 cri.go:89] found id: ""
	I1213 19:15:25.772178   92925 logs.go:282] 0 containers: []
	W1213 19:15:25.772201   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:15:25.772221   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:15:25.772305   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:15:25.799458   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:25.799526   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:25.799546   92925 cri.go:89] found id: ""
	I1213 19:15:25.799570   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:15:25.799645   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:25.803647   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:25.807583   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:15:25.807695   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:15:25.834975   92925 cri.go:89] found id: ""
	I1213 19:15:25.835051   92925 logs.go:282] 0 containers: []
	W1213 19:15:25.835066   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:15:25.835073   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:15:25.835133   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:15:25.864722   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:25.864769   92925 cri.go:89] found id: ""
	I1213 19:15:25.864778   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:15:25.864836   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:25.868764   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:15:25.868838   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:15:25.897111   92925 cri.go:89] found id: ""
	I1213 19:15:25.897133   92925 logs.go:282] 0 containers: []
	W1213 19:15:25.897141   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:15:25.897162   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:15:25.897174   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:15:26.007072   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:15:26.007104   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:15:26.025166   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:15:26.025201   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:15:26.111354   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:15:26.097401   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:26.097781   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:26.105030   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:26.105458   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:26.107065   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:15:26.097401   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:26.097781   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:26.105030   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:26.105458   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:26.107065   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:15:26.111374   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:15:26.111387   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:26.141476   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:15:26.141507   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:26.169374   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:15:26.169404   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:15:26.246093   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:15:26.246133   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:15:26.297802   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:15:26.297829   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:26.325154   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:15:26.325182   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:26.368489   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:15:26.368524   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:26.414072   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:15:26.414110   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:29.001164   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:15:29.013204   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:15:29.013272   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:15:29.047888   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:29.047909   92925 cri.go:89] found id: ""
	I1213 19:15:29.047918   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:15:29.047982   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:29.051890   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:15:29.051971   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:15:29.077464   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:29.077486   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:29.077490   92925 cri.go:89] found id: ""
	I1213 19:15:29.077498   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:15:29.077553   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:29.081462   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:29.084988   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:15:29.085157   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:15:29.115595   92925 cri.go:89] found id: ""
	I1213 19:15:29.115621   92925 logs.go:282] 0 containers: []
	W1213 19:15:29.115631   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:15:29.115637   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:15:29.115697   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:15:29.160656   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:29.160729   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:29.160748   92925 cri.go:89] found id: ""
	I1213 19:15:29.160772   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:15:29.160853   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:29.165160   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:29.168775   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:15:29.168891   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:15:29.199867   92925 cri.go:89] found id: ""
	I1213 19:15:29.199890   92925 logs.go:282] 0 containers: []
	W1213 19:15:29.199899   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:15:29.199911   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:15:29.200009   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:15:29.226478   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:29.226502   92925 cri.go:89] found id: ""
	I1213 19:15:29.226511   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:15:29.226565   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:29.230306   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:15:29.230382   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:15:29.260973   92925 cri.go:89] found id: ""
	I1213 19:15:29.260999   92925 logs.go:282] 0 containers: []
	W1213 19:15:29.261034   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:15:29.261044   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:15:29.261060   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:29.288533   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:15:29.288560   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:29.317072   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:15:29.317145   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:29.343899   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:15:29.343926   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:15:29.424466   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:15:29.424502   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:15:29.437265   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:15:29.437314   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:15:29.525751   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:15:29.505457   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:29.506350   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:29.518441   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:29.520261   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:29.521214   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:15:29.505457   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:29.506350   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:29.518441   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:29.520261   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:29.521214   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:15:29.525774   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:15:29.525787   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:29.565912   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:15:29.565947   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:29.614921   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:15:29.614962   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:29.695191   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:15:29.695229   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:15:29.726876   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:15:29.726907   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:15:32.331342   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:15:32.342123   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:15:32.342193   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:15:32.377492   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:32.377512   92925 cri.go:89] found id: ""
	I1213 19:15:32.377520   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:15:32.377603   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:32.381461   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:15:32.381535   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:15:32.408828   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:32.408849   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:32.408853   92925 cri.go:89] found id: ""
	I1213 19:15:32.408861   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:15:32.408913   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:32.412666   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:32.416683   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:15:32.416757   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:15:32.444710   92925 cri.go:89] found id: ""
	I1213 19:15:32.444734   92925 logs.go:282] 0 containers: []
	W1213 19:15:32.444744   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:15:32.444750   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:15:32.444842   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:15:32.470813   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:32.470834   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:32.470839   92925 cri.go:89] found id: ""
	I1213 19:15:32.470846   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:15:32.470904   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:32.474746   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:32.478110   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:15:32.478180   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:15:32.505590   92925 cri.go:89] found id: ""
	I1213 19:15:32.505616   92925 logs.go:282] 0 containers: []
	W1213 19:15:32.505625   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:15:32.505630   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:15:32.505685   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:15:32.534851   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:32.534873   92925 cri.go:89] found id: ""
	I1213 19:15:32.534882   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:15:32.534942   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:32.538913   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:15:32.539005   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:15:32.570980   92925 cri.go:89] found id: ""
	I1213 19:15:32.571020   92925 logs.go:282] 0 containers: []
	W1213 19:15:32.571029   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:15:32.571055   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:15:32.571075   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:15:32.672697   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:15:32.672739   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:15:32.685325   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:15:32.685360   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:15:32.762805   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:15:32.754695   13818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:32.755445   13818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:32.756898   13818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:32.757344   13818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:32.759247   13818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:15:32.754695   13818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:32.755445   13818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:32.756898   13818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:32.757344   13818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:32.759247   13818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:15:32.762877   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:15:32.762899   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:32.788216   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:15:32.788243   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:32.831764   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:15:32.831797   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:32.861451   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:15:32.861481   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:32.889040   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:15:32.889113   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:15:32.962682   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:15:32.962721   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:33.005926   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:15:33.005963   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:33.113066   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:15:33.113100   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:15:35.646466   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:15:35.657328   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:15:35.657400   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:15:35.682772   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:35.682796   92925 cri.go:89] found id: ""
	I1213 19:15:35.682805   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:15:35.682862   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:35.686943   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:15:35.687017   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:15:35.713394   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:35.713426   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:35.713433   92925 cri.go:89] found id: ""
	I1213 19:15:35.713440   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:15:35.713492   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:35.717236   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:35.720957   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:15:35.721060   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:15:35.747062   92925 cri.go:89] found id: ""
	I1213 19:15:35.747139   92925 logs.go:282] 0 containers: []
	W1213 19:15:35.747155   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:15:35.747162   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:15:35.747223   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:15:35.780788   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:35.780809   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:35.780814   92925 cri.go:89] found id: ""
	I1213 19:15:35.780822   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:15:35.780877   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:35.784913   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:35.788950   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:15:35.789084   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:15:35.817183   92925 cri.go:89] found id: ""
	I1213 19:15:35.817206   92925 logs.go:282] 0 containers: []
	W1213 19:15:35.817217   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:15:35.817223   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:15:35.817285   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:15:35.844649   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:35.844674   92925 cri.go:89] found id: ""
	I1213 19:15:35.844682   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:15:35.844741   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:35.848694   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:15:35.848772   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:15:35.880264   92925 cri.go:89] found id: ""
	I1213 19:15:35.880293   92925 logs.go:282] 0 containers: []
	W1213 19:15:35.880302   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:15:35.880311   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:15:35.880323   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:35.928133   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:15:35.928168   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:36.005056   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:15:36.005095   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:15:36.088199   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:15:36.088234   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:15:36.195615   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:15:36.195657   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:36.222570   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:15:36.222597   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:36.253158   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:15:36.253189   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:36.282294   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:15:36.282324   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:15:36.315027   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:15:36.315057   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:15:36.327415   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:15:36.327445   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:15:36.397770   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:15:36.388485   14006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:36.389249   14006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:36.391121   14006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:36.392189   14006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:36.392759   14006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:15:36.388485   14006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:36.389249   14006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:36.391121   14006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:36.392189   14006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:36.392759   14006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:15:36.397793   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:15:36.397809   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:38.950291   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:15:38.966129   92925 out.go:203] 
	W1213 19:15:38.969186   92925 out.go:285] X Exiting due to K8S_APISERVER_MISSING: adding node: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1213 19:15:38.969230   92925 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1213 19:15:38.969244   92925 out.go:285] * Related issues:
	W1213 19:15:38.969256   92925 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1213 19:15:38.969271   92925 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1213 19:15:38.972406   92925 out.go:203] 
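	The K8S_APISERVER_MISSING exit above is the end of a loop of identical probes: minikube repeatedly fails the pgrep check for a running kube-apiserver process and gives up after the 6m0s wait. A minimal sketch for reproducing the same checks by hand, assuming the ha-605114 node from the CRI-O log below is reachable over SSH; the commands are the ones the log itself runs, and <id> stands for whichever container ID crictl prints:
	
	    # same probe the log runs: is any apiserver process alive?
	    sudo pgrep -xnf kube-apiserver.*minikube.*
	    # list apiserver containers (running or exited) and read their last log lines
	    sudo crictl ps -a --quiet --name=kube-apiserver
	    sudo /usr/local/bin/crictl logs --tail 400 <id>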
	
	
	==> CRI-O <==
	Dec 13 19:09:40 ha-605114 crio[670]: time="2025-12-13T19:09:40.008646414Z" level=info msg="Started container" PID=1413 containerID=162b495909eae3cb5f079d5fd260e61e560cd11212e69ad52138f4180f770a5b description=kube-system/storage-provisioner/storage-provisioner id=78f061d7-6d54-48f8-b513-d5c320e8e810 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3b4d0206cec1a1b4c0b5752a4babdaf8710471f5502067896b44e2d2df0c4d5b
	Dec 13 19:09:40 ha-605114 crio[670]: time="2025-12-13T19:09:40.011070102Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.12.1" id=d15204a7-37cc-4d8c-a231-166dcd68a520 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 19:09:40 ha-605114 crio[670]: time="2025-12-13T19:09:40.012539045Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.12.1" id=6b3690d3-7f7d-43f9-95f1-1cd8e6e953ff name=/runtime.v1.ImageService/ImageStatus
	Dec 13 19:09:40 ha-605114 crio[670]: time="2025-12-13T19:09:40.02550851Z" level=info msg="Creating container: kube-system/coredns-66bc5c9577-85rpk/coredns" id=ac3e351b-9839-445c-b06c-72f089234671 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 19:09:40 ha-605114 crio[670]: time="2025-12-13T19:09:40.025812066Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 19:09:40 ha-605114 crio[670]: time="2025-12-13T19:09:40.048513937Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 19:09:40 ha-605114 crio[670]: time="2025-12-13T19:09:40.049307526Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 19:09:40 ha-605114 crio[670]: time="2025-12-13T19:09:40.073222358Z" level=info msg="Created container 98620d4f3c674bb9bab6e41c90c32e2b069e67c18730baafb91af41ae8c19bcf: default/busybox-7b57f96db7-h5qqv/busybox" id=3c28fa9a-be33-4fec-ad16-52c4765c6b6f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 19:09:40 ha-605114 crio[670]: time="2025-12-13T19:09:40.082412808Z" level=info msg="Starting container: 98620d4f3c674bb9bab6e41c90c32e2b069e67c18730baafb91af41ae8c19bcf" id=7ee27ecf-6fea-48b9-9feb-9cb5f5270b26 name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 19:09:40 ha-605114 crio[670]: time="2025-12-13T19:09:40.109207129Z" level=info msg="Started container" PID=1422 containerID=98620d4f3c674bb9bab6e41c90c32e2b069e67c18730baafb91af41ae8c19bcf description=default/busybox-7b57f96db7-h5qqv/busybox id=7ee27ecf-6fea-48b9-9feb-9cb5f5270b26 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3641321fd538fed941abd3cee5bdec42be3fbe581a0a743eea30ee6edf2692ee
	Dec 13 19:09:40 ha-605114 crio[670]: time="2025-12-13T19:09:40.121281524Z" level=info msg="Created container 511836b213244a6dfa3897abb4838a98fc68e420901993467750d852b23b8505: kube-system/coredns-66bc5c9577-85rpk/coredns" id=ac3e351b-9839-445c-b06c-72f089234671 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 19:09:40 ha-605114 crio[670]: time="2025-12-13T19:09:40.122743263Z" level=info msg="Starting container: 511836b213244a6dfa3897abb4838a98fc68e420901993467750d852b23b8505" id=4e4e597f-bb09-435f-a3da-58627ddb7595 name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 19:09:40 ha-605114 crio[670]: time="2025-12-13T19:09:40.124507425Z" level=info msg="Started container" PID=1433 containerID=511836b213244a6dfa3897abb4838a98fc68e420901993467750d852b23b8505 description=kube-system/coredns-66bc5c9577-85rpk/coredns id=4e4e597f-bb09-435f-a3da-58627ddb7595 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1d4641fc3fdaccf9146fa15e852f55d85346be6c485420108067be6aabe0b5f4
	Dec 13 19:09:45 ha-605114 crio[670]: time="2025-12-13T19:09:45.122399466Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 19:09:45 ha-605114 crio[670]: time="2025-12-13T19:09:45.129604955Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 19:09:45 ha-605114 crio[670]: time="2025-12-13T19:09:45.129827191Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 19:09:45 ha-605114 crio[670]: time="2025-12-13T19:09:45.129946091Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 19:09:45 ha-605114 crio[670]: time="2025-12-13T19:09:45.139648811Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 19:09:45 ha-605114 crio[670]: time="2025-12-13T19:09:45.139699543Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 19:09:45 ha-605114 crio[670]: time="2025-12-13T19:09:45.139727531Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 19:09:45 ha-605114 crio[670]: time="2025-12-13T19:09:45.147861576Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 19:09:45 ha-605114 crio[670]: time="2025-12-13T19:09:45.148118551Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 19:09:45 ha-605114 crio[670]: time="2025-12-13T19:09:45.148270222Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 19:09:45 ha-605114 crio[670]: time="2025-12-13T19:09:45.153836563Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 19:09:45 ha-605114 crio[670]: time="2025-12-13T19:09:45.154024681Z" level=info msg="Updated default CNI network name to kindnet"
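	The CRI-O excerpt ends with kindnet rewriting its CNI configuration. A sketch for re-tailing the same journal and inspecting the resulting config by hand, assuming shell access to the ha-605114 node; the journalctl command is the one the log gathering above uses, and the conflist path is the one CRI-O reports:
	
	    sudo journalctl -u crio -n 400
	    sudo cat /etc/cni/net.d/10-kindnet.conflist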
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	511836b213244       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   7 minutes ago       Running             coredns                   2                   1d4641fc3fdac       coredns-66bc5c9577-85rpk            kube-system
	98620d4f3c674       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   7 minutes ago       Running             busybox                   2                   3641321fd538f       busybox-7b57f96db7-h5qqv            default
	162b495909eae       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   7 minutes ago       Running             storage-provisioner       4                   3b4d0206cec1a       storage-provisioner                 kube-system
	167e9e0789f86       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   7 minutes ago       Running             kube-controller-manager   7                   c35b44e70d6d7       kube-controller-manager-ha-605114   kube-system
	7bc9cb09a081e       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   8 minutes ago       Exited              kube-controller-manager   6                   c35b44e70d6d7       kube-controller-manager-ha-605114   kube-system
	76f4d2ef7a334       369db9dfa6fa96c1f4a0f3c827dbe864b5ded1802c8b4810b5ff9fcc5f5f2c70   9 minutes ago       Running             kube-vip                  3                   6e0df90fd1fab       kube-vip-ha-605114                  kube-system
	7db7b17ab2144       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   9 minutes ago       Running             coredns                   2                   d895cdca857a1       coredns-66bc5c9577-rc9qg            kube-system
	adb6a0d2cd304       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   9 minutes ago       Running             kube-proxy                2                   511ce74a57340       kube-proxy-c6t4v                    kube-system
	f1a416886d288       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   9 minutes ago       Running             kindnet-cni               2                   e61041a4c5e3e       kindnet-dtnb7                       kube-system
	9a81ddd488bb7       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   9 minutes ago       Running             etcd                      2                   a40bba21dff67       etcd-ha-605114                      kube-system
	ee202abc8dba3       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   9 minutes ago       Running             kube-scheduler            2                   5a646569f389f       kube-scheduler-ha-605114            kube-system
	3c729bb1538bf       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   9 minutes ago       Running             kube-apiserver            2                   390331a7238b2       kube-apiserver-ha-605114            kube-system
	2b3744a5aa7a9       369db9dfa6fa96c1f4a0f3c827dbe864b5ded1802c8b4810b5ff9fcc5f5f2c70   9 minutes ago       Exited              kube-vip                  2                   6e0df90fd1fab       kube-vip-ha-605114                  kube-system
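	This table is the output of the container-status probe gathered earlier (sudo crictl ps -a). The Exited entries are the natural place to look next; a sketch assuming crictl accepts the abbreviated IDs shown in the table:
	
	    sudo crictl ps -a
	    sudo /usr/local/bin/crictl logs --tail 400 7bc9cb09a081e   # Exited kube-controller-manager, attempt 6
	    sudo /usr/local/bin/crictl logs --tail 400 2b3744a5aa7a9   # Exited kube-vip, attempt 2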
	
	
	==> coredns [511836b213244a6dfa3897abb4838a98fc68e420901993467750d852b23b8505] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60720 - 44913 "HINFO IN 3829035828325911617.4912160736216291985. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012907336s
	
	
	==> coredns [7db7b17ab2144a863bb29b6e2f750b6eb865e786cf824a74c0b415ac4077800a] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58025 - 60628 "HINFO IN 3868133962360849883.307927823530690311. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.054923758s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
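	Both coredns instances fail to reach 10.96.0.1:443 (the in-cluster apiserver service), which matches the localhost:8443 connection-refused errors in the kubectl output above: nothing is answering on the apiserver endpoint. A quick health probe from the node, assuming the same kubectl binary and kubeconfig paths the log gathering uses:
	
	    sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get --raw /healthz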
	
	
	==> describe nodes <==
	Name:               ha-605114
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-605114
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=142a8bd7cb3f031b5f72a3965bb211dc77d9e1a7
	                    minikube.k8s.io/name=ha-605114
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T18_59_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 18:59:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-605114
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 19:17:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 19:15:26 +0000   Sat, 13 Dec 2025 18:59:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 19:15:26 +0000   Sat, 13 Dec 2025 18:59:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 19:15:26 +0000   Sat, 13 Dec 2025 18:59:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 19:15:26 +0000   Sat, 13 Dec 2025 19:00:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-605114
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 78f85184c267cd52312ad0096937f858
	  System UUID:                8ff9857c-e2f0-4d86-9970-2f9e1bad48df
	  Boot ID:                    76aeba50-958b-45ee-957d-e00cd07a99b2
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-h5qqv             0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 coredns-66bc5c9577-85rpk             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     17m
	  kube-system                 coredns-66bc5c9577-rc9qg             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     17m
	  kube-system                 etcd-ha-605114                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         17m
	  kube-system                 kindnet-dtnb7                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      17m
	  kube-system                 kube-apiserver-ha-605114             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-ha-605114    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-c6t4v                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-ha-605114             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-vip-ha-605114                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 17m                    kube-proxy       
	  Normal   Starting                 9m13s                  kube-proxy       
	  Normal   Starting                 11m                    kube-proxy       
	  Warning  CgroupV1                 18m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     18m (x8 over 18m)      kubelet          Node ha-605114 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    18m (x8 over 18m)      kubelet          Node ha-605114 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  18m (x8 over 18m)      kubelet          Node ha-605114 status is now: NodeHasSufficientMemory
	  Normal   Starting                 18m                    kubelet          Starting kubelet.
	  Normal   Starting                 17m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 17m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     17m                    kubelet          Node ha-605114 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  17m                    kubelet          Node ha-605114 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    17m                    kubelet          Node ha-605114 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           17m                    node-controller  Node ha-605114 event: Registered Node ha-605114 in Controller
	  Normal   RegisteredNode           17m                    node-controller  Node ha-605114 event: Registered Node ha-605114 in Controller
	  Normal   NodeReady                17m                    kubelet          Node ha-605114 status is now: NodeReady
	  Normal   RegisteredNode           15m                    node-controller  Node ha-605114 event: Registered Node ha-605114 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-605114 event: Registered Node ha-605114 in Controller
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node ha-605114 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node ha-605114 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x8 over 12m)      kubelet          Node ha-605114 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m                    node-controller  Node ha-605114 event: Registered Node ha-605114 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-605114 event: Registered Node ha-605114 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-605114 event: Registered Node ha-605114 in Controller
	  Normal   Starting                 9m25s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m25s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9m25s (x8 over 9m25s)  kubelet          Node ha-605114 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m25s (x8 over 9m25s)  kubelet          Node ha-605114 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m25s (x8 over 9m25s)  kubelet          Node ha-605114 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m36s                  node-controller  Node ha-605114 event: Registered Node ha-605114 in Controller
	  Normal   RegisteredNode           50s                    node-controller  Node ha-605114 event: Registered Node ha-605114 in Controller
	
	
	Name:               ha-605114-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-605114-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=142a8bd7cb3f031b5f72a3965bb211dc77d9e1a7
	                    minikube.k8s.io/name=ha-605114
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_13T19_00_04_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 19:00:03 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-605114-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 19:07:15 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 13 Dec 2025 19:05:54 +0000   Sat, 13 Dec 2025 19:10:33 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 13 Dec 2025 19:05:54 +0000   Sat, 13 Dec 2025 19:10:33 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 13 Dec 2025 19:05:54 +0000   Sat, 13 Dec 2025 19:10:33 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 13 Dec 2025 19:05:54 +0000   Sat, 13 Dec 2025 19:10:33 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-605114-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 78f85184c267cd52312ad0096937f858
	  System UUID:                c9a90528-cc46-44be-a006-2245d1e8d275
	  Boot ID:                    76aeba50-958b-45ee-957d-e00cd07a99b2
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-gqp98                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 etcd-ha-605114-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         17m
	  kube-system                 kindnet-hxgh6                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      17m
	  kube-system                 kube-apiserver-ha-605114-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-ha-605114-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-87qlc                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-ha-605114-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-vip-ha-605114-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 17m                kube-proxy       
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   RegisteredNode           17m                node-controller  Node ha-605114-m02 event: Registered Node ha-605114-m02 in Controller
	  Normal   RegisteredNode           17m                node-controller  Node ha-605114-m02 event: Registered Node ha-605114-m02 in Controller
	  Normal   RegisteredNode           15m                node-controller  Node ha-605114-m02 event: Registered Node ha-605114-m02 in Controller
	  Normal   NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node ha-605114-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x8 over 13m)  kubelet          Node ha-605114-m02 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 13m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node ha-605114-m02 status is now: NodeHasSufficientMemory
	  Normal   Starting                 13m                kubelet          Starting kubelet.
	  Normal   NodeNotReady             12m                node-controller  Node ha-605114-m02 status is now: NodeNotReady
	  Normal   RegisteredNode           12m                node-controller  Node ha-605114-m02 event: Registered Node ha-605114-m02 in Controller
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node ha-605114-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node ha-605114-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x8 over 12m)  kubelet          Node ha-605114-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m                node-controller  Node ha-605114-m02 event: Registered Node ha-605114-m02 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-605114-m02 event: Registered Node ha-605114-m02 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-605114-m02 event: Registered Node ha-605114-m02 in Controller
	  Normal   RegisteredNode           7m36s              node-controller  Node ha-605114-m02 event: Registered Node ha-605114-m02 in Controller
	  Normal   NodeNotReady             6m45s              node-controller  Node ha-605114-m02 status is now: NodeNotReady
	  Normal   RegisteredNode           50s                node-controller  Node ha-605114-m02 event: Registered Node ha-605114-m02 in Controller
	
	
	Name:               ha-605114-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-605114-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=142a8bd7cb3f031b5f72a3965bb211dc77d9e1a7
	                    minikube.k8s.io/name=ha-605114
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_13T19_02_38_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 19:02:38 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-605114-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 19:07:09 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 13 Dec 2025 19:06:39 +0000   Sat, 13 Dec 2025 19:10:33 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 13 Dec 2025 19:06:39 +0000   Sat, 13 Dec 2025 19:10:33 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 13 Dec 2025 19:06:39 +0000   Sat, 13 Dec 2025 19:10:33 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 13 Dec 2025 19:06:39 +0000   Sat, 13 Dec 2025 19:10:33 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-605114-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 78f85184c267cd52312ad0096937f858
	  System UUID:                1710ae92-5ee6-4178-a2ff-b2523f5ef2e1
	  Boot ID:                    76aeba50-958b-45ee-957d-e00cd07a99b2
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-wl925    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kindnet-9xnpk               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-proxy-lqp4f            0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 14m                kube-proxy       
	  Normal   NodeHasNoDiskPressure    14m (x3 over 14m)  kubelet          Node ha-605114-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14m (x3 over 14m)  kubelet          Node ha-605114-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  14m (x3 over 14m)  kubelet          Node ha-605114-m04 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           14m                node-controller  Node ha-605114-m04 event: Registered Node ha-605114-m04 in Controller
	  Normal   RegisteredNode           14m                node-controller  Node ha-605114-m04 event: Registered Node ha-605114-m04 in Controller
	  Normal   RegisteredNode           14m                node-controller  Node ha-605114-m04 event: Registered Node ha-605114-m04 in Controller
	  Normal   NodeReady                13m                kubelet          Node ha-605114-m04 status is now: NodeReady
	  Normal   RegisteredNode           12m                node-controller  Node ha-605114-m04 event: Registered Node ha-605114-m04 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-605114-m04 event: Registered Node ha-605114-m04 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-605114-m04 event: Registered Node ha-605114-m04 in Controller
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node ha-605114-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node ha-605114-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node ha-605114-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node ha-605114-m04 event: Registered Node ha-605114-m04 in Controller
	  Normal   RegisteredNode           7m36s              node-controller  Node ha-605114-m04 event: Registered Node ha-605114-m04 in Controller
	  Normal   NodeNotReady             6m45s              node-controller  Node ha-605114-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           50s                node-controller  Node ha-605114-m04 event: Registered Node ha-605114-m04 in Controller
	
	
	Name:               ha-605114-m05
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-605114-m05
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=142a8bd7cb3f031b5f72a3965bb211dc77d9e1a7
	                    minikube.k8s.io/name=ha-605114
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_13T19_16_31_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 19:16:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-605114-m05
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 19:17:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 19:17:15 +0000   Sat, 13 Dec 2025 19:16:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 19:17:15 +0000   Sat, 13 Dec 2025 19:16:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 19:17:15 +0000   Sat, 13 Dec 2025 19:16:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 19:17:15 +0000   Sat, 13 Dec 2025 19:17:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.6
	  Hostname:    ha-605114-m05
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 78f85184c267cd52312ad0096937f858
	  System UUID:                d79d0921-9cb8-408f-9cee-594e7d75ae84
	  Boot ID:                    76aeba50-958b-45ee-957d-e00cd07a99b2
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-6ldgc                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 etcd-ha-605114-m05                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         46s
	  kube-system                 kindnet-c6v4q                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      49s
	  kube-system                 kube-apiserver-ha-605114-m05             250m (12%)    0 (0%)      0 (0%)           0 (0%)         46s
	  kube-system                 kube-controller-manager-ha-605114-m05    200m (10%)    0 (0%)      0 (0%)           0 (0%)         46s
	  kube-system                 kube-proxy-5h27j                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 kube-scheduler-ha-605114-m05             100m (5%)     0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                 kube-vip-ha-605114-m05                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  Starting        44s   kube-proxy       
	  Normal  RegisteredNode  45s   node-controller  Node ha-605114-m05 event: Registered Node ha-605114-m05 in Controller
	  Normal  RegisteredNode  45s   node-controller  Node ha-605114-m05 event: Registered Node ha-605114-m05 in Controller
	
	
	==> dmesg <==
	[Dec13 17:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014739] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.517365] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033368] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.774100] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.795951] kauditd_printk_skb: 39 callbacks suppressed
	[Dec13 18:17] overlayfs: idmapped layers are currently not supported
	[  +0.067652] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec13 18:23] overlayfs: idmapped layers are currently not supported
	[Dec13 18:24] overlayfs: idmapped layers are currently not supported
	[Dec13 18:42] overlayfs: idmapped layers are currently not supported
	[Dec13 18:59] overlayfs: idmapped layers are currently not supported
	[ +33.753607] overlayfs: idmapped layers are currently not supported
	[Dec13 19:01] overlayfs: idmapped layers are currently not supported
	[Dec13 19:02] overlayfs: idmapped layers are currently not supported
	[Dec13 19:03] overlayfs: idmapped layers are currently not supported
	[Dec13 19:05] overlayfs: idmapped layers are currently not supported
	[  +4.041925] overlayfs: idmapped layers are currently not supported
	[ +36.958854] overlayfs: idmapped layers are currently not supported
	[Dec13 19:06] overlayfs: idmapped layers are currently not supported
	[Dec13 19:07] overlayfs: idmapped layers are currently not supported
	[  +4.088622] overlayfs: idmapped layers are currently not supported
	[Dec13 19:16] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [9a81ddd488bb7e9ca9d20cc8af4e9414463f3bf2bd40edd26c2e9395f731a3ec] <==
	{"level":"warn","ts":"2025-12-13T19:16:17.142786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.6:50648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:16:17.194193Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"5009f1552d554ae7","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:16:17.195086Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"5009f1552d554ae7","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:16:17.230885Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.6:50670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:16:17.256346Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.6:50676","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-13T19:16:17.338622Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"5009f1552d554ae7"}
	{"level":"warn","ts":"2025-12-13T19:16:17.429365Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"5009f1552d554ae7","error":"failed to write 5009f1552d554ae7 on stream Message (write tcp 192.168.49.2:2380->192.168.49.6:55896: write: broken pipe)"}
	{"level":"warn","ts":"2025-12-13T19:16:17.429468Z","caller":"rafthttp/stream.go:222","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"5009f1552d554ae7"}
	{"level":"info","ts":"2025-12-13T19:16:17.513505Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"5009f1552d554ae7","stream-type":"stream Message"}
	{"level":"info","ts":"2025-12-13T19:16:17.513549Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"5009f1552d554ae7"}
	{"level":"info","ts":"2025-12-13T19:16:17.513563Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"5009f1552d554ae7"}
	{"level":"warn","ts":"2025-12-13T19:16:17.515698Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"5009f1552d554ae7","error":"failed to write 5009f1552d554ae7 on stream MsgApp v2 (write tcp 192.168.49.2:2380->192.168.49.6:55890: write: broken pipe)"}
	{"level":"warn","ts":"2025-12-13T19:16:17.515782Z","caller":"rafthttp/stream.go:222","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"5009f1552d554ae7"}
	{"level":"info","ts":"2025-12-13T19:16:17.623903Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"5009f1552d554ae7"}
	{"level":"info","ts":"2025-12-13T19:16:17.623955Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"5009f1552d554ae7"}
	{"level":"info","ts":"2025-12-13T19:16:17.724525Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"5009f1552d554ae7","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-12-13T19:16:17.724569Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"5009f1552d554ae7"}
	{"level":"warn","ts":"2025-12-13T19:16:19.304613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.6:53116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:16:21.341113Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.6:53134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:16:23.365561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.6:53150","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-13T19:16:30.482933Z","caller":"etcdserver/server.go:2262","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-12-13T19:16:35.825998Z","caller":"etcdserver/server.go:2262","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-12-13T19:16:46.989497Z","caller":"etcdserver/server.go:1872","msg":"sent merged snapshot","from":"aec36adc501070cc","to":"5009f1552d554ae7","bytes":6821221,"size":"6.8 MB","took":"30.436888515s"}
	{"level":"warn","ts":"2025-12-13T19:17:19.051923Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"117.73027ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/\" range_end:\"/registry/events0\" limit:500 ","response":"range_response_count:500 size:367897"}
	{"level":"info","ts":"2025-12-13T19:17:19.051986Z","caller":"traceutil/trace.go:172","msg":"trace[1872679937] range","detail":"{range_begin:/registry/events/; range_end:/registry/events0; response_count:500; response_revision:3793; }","duration":"117.816407ms","start":"2025-12-13T19:17:18.934159Z","end":"2025-12-13T19:17:19.051975Z","steps":["trace[1872679937] 'range keys from bolt db'  (duration: 116.686715ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:17:19 up  1:59,  0 user,  load average: 0.82, 1.19, 1.33
	Linux ha-605114 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f1a416886d288f33359cd21dacc737dbed6a3c975d9323a89f8c93828c040431] <==
	I1213 19:16:45.158434       1 main.go:324] Node ha-605114-m02 has CIDR [10.244.1.0/24] 
	I1213 19:16:55.125153       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 19:16:55.125252       1 main.go:301] handling current node
	I1213 19:16:55.125290       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1213 19:16:55.125322       1 main.go:324] Node ha-605114-m02 has CIDR [10.244.1.0/24] 
	I1213 19:16:55.125503       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1213 19:16:55.125564       1 main.go:324] Node ha-605114-m04 has CIDR [10.244.3.0/24] 
	I1213 19:16:55.125656       1 main.go:297] Handling node with IPs: map[192.168.49.6:{}]
	I1213 19:16:55.125673       1 main.go:324] Node ha-605114-m05 has CIDR [10.244.2.0/24] 
	I1213 19:17:05.130684       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1213 19:17:05.130741       1 main.go:324] Node ha-605114-m02 has CIDR [10.244.1.0/24] 
	I1213 19:17:05.130977       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1213 19:17:05.130996       1 main.go:324] Node ha-605114-m04 has CIDR [10.244.3.0/24] 
	I1213 19:17:05.131090       1 main.go:297] Handling node with IPs: map[192.168.49.6:{}]
	I1213 19:17:05.131105       1 main.go:324] Node ha-605114-m05 has CIDR [10.244.2.0/24] 
	I1213 19:17:05.131319       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 19:17:05.131399       1 main.go:301] handling current node
	I1213 19:17:15.121782       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 19:17:15.121872       1 main.go:301] handling current node
	I1213 19:17:15.121890       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1213 19:17:15.121896       1 main.go:324] Node ha-605114-m02 has CIDR [10.244.1.0/24] 
	I1213 19:17:15.122089       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1213 19:17:15.122114       1 main.go:324] Node ha-605114-m04 has CIDR [10.244.3.0/24] 
	I1213 19:17:15.122220       1 main.go:297] Handling node with IPs: map[192.168.49.6:{}]
	I1213 19:17:15.122232       1 main.go:324] Node ha-605114-m05 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [3c729bb1538bfb45bc9b5542f5524916c96b118344d2be8a42e58a0bc6d4cb0d] <==
	{"level":"warn","ts":"2025-12-13T19:09:39.225607Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40012ff680/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-13T19:09:39.225637Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40014ec3c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-13T19:09:39.225654Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40029a8780/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-13T19:09:39.225669Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40012fc780/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-13T19:09:39.225684Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40012fd2c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-13T19:09:39.231292Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40012fc1e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-13T19:09:39.231412Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40019832c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-13T19:09:39.231467Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001982000/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-13T19:09:39.231521Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400103ad20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-13T19:09:39.231578Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40019b2000/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-13T19:09:39.231633Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001f0bc20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-13T19:09:39.231700Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40012fed20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-13T19:09:39.231767Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40012fed20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-13T19:09:39.231831Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40028461e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-13T19:09:39.231883Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40028461e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-13T19:09:39.231933Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40028461e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-13T19:09:39.231988Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001bfa5a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-13T19:09:39.232044Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001bfa5a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	W1213 19:09:41.980970       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1213 19:09:41.982698       1 controller.go:667] quota admission added evaluator for: endpoints
	I1213 19:09:41.995308       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 19:09:44.281972       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1213 19:09:52.543985       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 19:10:34.144307       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1213 19:10:34.189645       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [167e9e0789f864655d959c63fd731257c88aa1e1b22515ec35f4a07af4678202] <==
	E1213 19:10:23.979852       1 gc_controller.go:151] "Failed to get node" err="node \"ha-605114-m03\" not found" logger="pod-garbage-collector-controller" node="ha-605114-m03"
	E1213 19:10:23.979884       1 gc_controller.go:151] "Failed to get node" err="node \"ha-605114-m03\" not found" logger="pod-garbage-collector-controller" node="ha-605114-m03"
	E1213 19:10:23.979949       1 gc_controller.go:151] "Failed to get node" err="node \"ha-605114-m03\" not found" logger="pod-garbage-collector-controller" node="ha-605114-m03"
	E1213 19:10:23.979979       1 gc_controller.go:151] "Failed to get node" err="node \"ha-605114-m03\" not found" logger="pod-garbage-collector-controller" node="ha-605114-m03"
	I1213 19:10:24.001195       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-605114-m03"
	I1213 19:10:24.044627       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-605114-m03"
	I1213 19:10:24.044809       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-605114-m03"
	I1213 19:10:24.081792       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-605114-m03"
	I1213 19:10:24.081903       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-605114-m03"
	I1213 19:10:24.149160       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-605114-m03"
	I1213 19:10:24.149272       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-605114-m03"
	I1213 19:10:24.187394       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-605114-m03"
	I1213 19:10:24.187500       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-4kfpv"
	I1213 19:10:24.241495       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-4kfpv"
	I1213 19:10:24.241622       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-5m48f"
	I1213 19:10:24.284484       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-5m48f"
	I1213 19:10:24.284851       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-605114-m03"
	I1213 19:10:24.328812       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-605114-m03"
	I1213 19:15:34.087612       1 taint_eviction.go:111] "Deleting pod" logger="taint-eviction-controller" controller="taint-eviction-controller" pod="default/busybox-7b57f96db7-wl925"
	I1213 19:15:44.076408       1 taint_eviction.go:111] "Deleting pod" logger="taint-eviction-controller" controller="taint-eviction-controller" pod="default/busybox-7b57f96db7-gqp98"
	I1213 19:16:30.485685       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-605114-m05\" does not exist"
	I1213 19:16:30.546704       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-605114-m05" podCIDRs=["10.244.2.0/24"]
	I1213 19:16:34.286406       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-605114-m05"
	I1213 19:16:34.286778       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="PartialDisruption"
	I1213 19:17:19.294604       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	
	
	==> kube-controller-manager [7bc9cb09a081ed47d17ecf35e2d91134eaacd5250ce00bcdebed3d1097640773] <==
	I1213 19:08:49.567762       1 serving.go:386] Generated self-signed cert in-memory
	I1213 19:08:50.364508       1 controllermanager.go:191] "Starting" version="v1.34.2"
	I1213 19:08:50.364608       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 19:08:50.366449       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1213 19:08:50.366623       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1213 19:08:50.366938       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1213 19:08:50.366991       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1213 19:09:04.386470       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[-]etcd failed: reason withheld\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-proxy [adb6a0d2cd30435f1f392f09033a5ad40b3f1d3a5a2f1fe0d2ae76a50bf8f3b4] <==
	I1213 19:08:50.244883       1 reflector.go:568] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding"
	E1213 19:08:50.246471       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": http2: client connection lost"
	E1213 19:08:54.165411       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-605114&resourceVersion=2607\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 19:08:54.165542       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2599\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1213 19:08:54.165634       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2598\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1213 19:08:54.165741       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2598\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1213 19:08:57.237395       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-605114&resourceVersion=2607\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 19:08:57.237414       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2598\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1213 19:08:57.237660       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2599\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1213 19:08:57.237667       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2598\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1213 19:09:03.989710       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2599\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1213 19:09:03.989962       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2598\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1213 19:09:03.990083       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.49.254:8443: connect: no route to host"
	E1213 19:09:03.990245       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2598\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1213 19:09:03.990394       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-605114&resourceVersion=2607\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 19:09:15.029488       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-605114&resourceVersion=2607\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 19:09:15.029488       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2598\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1213 19:09:15.029671       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2599\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1213 19:09:15.029765       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2598\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1213 19:09:18.101424       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.49.254:8443: connect: no route to host"
	E1213 19:09:31.797443       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2599\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1213 19:09:31.797538       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.49.254:8443: connect: no route to host"
	E1213 19:09:31.797646       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-605114&resourceVersion=2607\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 19:09:34.869405       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2598\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1213 19:09:42.229400       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2598\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	
	
	==> kube-scheduler [ee202abc8dba3b97ac56d7c3063ce4fae0734134ba47b9d6070588c897f7baf0] <==
	E1213 19:08:02.527700       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1213 19:08:02.527776       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1213 19:08:02.527848       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1213 19:08:02.527900       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1213 19:08:02.527911       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1213 19:08:02.527950       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1213 19:08:02.528002       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1213 19:08:02.528106       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1213 19:08:02.528181       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1213 19:08:02.528340       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1213 19:08:02.528402       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1213 19:08:03.355200       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1213 19:08:03.375752       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1213 19:08:03.384341       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1213 19:08:03.496281       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1213 19:08:03.527514       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 19:08:03.564170       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1213 19:08:03.604860       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1213 19:08:03.609546       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1213 19:08:03.663151       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1213 19:08:03.683755       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1213 19:08:03.838837       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1213 19:08:03.901316       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1213 19:08:03.901563       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1213 19:08:06.412915       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 13 19:09:04 ha-605114 kubelet[806]: I1213 19:09:04.239034     806 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
	Dec 13 19:09:04 ha-605114 kubelet[806]: E1213 19:09:04.524602     806 status_manager.go:1018] "Failed to get status for pod" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods coredns-66bc5c9577-rc9qg)" podUID="0f2b52ea-d2f2-4307-8a52-619a737c2611" pod="kube-system/coredns-66bc5c9577-rc9qg"
	Dec 13 19:09:04 ha-605114 kubelet[806]: I1213 19:09:04.666266     806 scope.go:117] "RemoveContainer" containerID="38e10b9deae562bcc475d6b257111633953b93aa5e59b05a1a5aaca29705804b"
	Dec 13 19:09:04 ha-605114 kubelet[806]: I1213 19:09:04.666833     806 scope.go:117] "RemoveContainer" containerID="7bc9cb09a081ed47d17ecf35e2d91134eaacd5250ce00bcdebed3d1097640773"
	Dec 13 19:09:04 ha-605114 kubelet[806]: E1213 19:09:04.667006     806 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-605114_kube-system(6b36430ebbfe01869fc54848b2e1c2a9)\"" pod="kube-system/kube-controller-manager-ha-605114" podUID="6b36430ebbfe01869fc54848b2e1c2a9"
	Dec 13 19:09:05 ha-605114 kubelet[806]: E1213 19:09:05.059732     806 kubelet_node_status.go:486] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T19:08:55Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T19:08:55Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T19:08:55Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T19:08:55Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"re
cursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"ha-605114\": Patch \"https://192.168.49.2:8443/api/v1/nodes/ha-605114/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Dec 13 19:09:06 ha-605114 kubelet[806]: I1213 19:09:06.894025     806 scope.go:117] "RemoveContainer" containerID="7bc9cb09a081ed47d17ecf35e2d91134eaacd5250ce00bcdebed3d1097640773"
	Dec 13 19:09:06 ha-605114 kubelet[806]: E1213 19:09:06.894244     806 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-605114_kube-system(6b36430ebbfe01869fc54848b2e1c2a9)\"" pod="kube-system/kube-controller-manager-ha-605114" podUID="6b36430ebbfe01869fc54848b2e1c2a9"
	Dec 13 19:09:12 ha-605114 kubelet[806]: E1213 19:09:12.933737     806 projected.go:196] Error preparing data for projected volume kube-api-access-sctl2 for pod kube-system/storage-provisioner: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
	Dec 13 19:09:12 ha-605114 kubelet[806]: E1213 19:09:12.933838     806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2bdd28fc-c3f6-401d-9328-27dc669e196a-kube-api-access-sctl2 podName:2bdd28fc-c3f6-401d-9328-27dc669e196a nodeName:}" failed. No retries permitted until 2025-12-13 19:09:13.933816541 +0000 UTC m=+79.712758196 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-sctl2" (UniqueName: "kubernetes.io/projected/2bdd28fc-c3f6-401d-9328-27dc669e196a-kube-api-access-sctl2") pod "storage-provisioner" (UID: "2bdd28fc-c3f6-401d-9328-27dc669e196a") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
	Dec 13 19:09:12 ha-605114 kubelet[806]: E1213 19:09:12.934020     806 projected.go:196] Error preparing data for projected volume kube-api-access-4p9km for pod kube-system/coredns-66bc5c9577-85rpk: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
	Dec 13 19:09:12 ha-605114 kubelet[806]: E1213 19:09:12.934081     806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d7650f5f-c93c-4824-98ba-c6242f1d9595-kube-api-access-4p9km podName:d7650f5f-c93c-4824-98ba-c6242f1d9595 nodeName:}" failed. No retries permitted until 2025-12-13 19:09:13.934068028 +0000 UTC m=+79.713009674 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-4p9km" (UniqueName: "kubernetes.io/projected/d7650f5f-c93c-4824-98ba-c6242f1d9595-kube-api-access-4p9km") pod "coredns-66bc5c9577-85rpk" (UID: "d7650f5f-c93c-4824-98ba-c6242f1d9595") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
	Dec 13 19:09:12 ha-605114 kubelet[806]: E1213 19:09:12.934128     806 projected.go:196] Error preparing data for projected volume kube-api-access-rtb9w for pod default/busybox-7b57f96db7-h5qqv: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
	Dec 13 19:09:12 ha-605114 kubelet[806]: E1213 19:09:12.934157     806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b89d6cc7-836d-44be-997e-9a7fe221a5d8-kube-api-access-rtb9w podName:b89d6cc7-836d-44be-997e-9a7fe221a5d8 nodeName:}" failed. No retries permitted until 2025-12-13 19:09:13.934149422 +0000 UTC m=+79.713091069 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-rtb9w" (UniqueName: "kubernetes.io/projected/b89d6cc7-836d-44be-997e-9a7fe221a5d8-kube-api-access-rtb9w") pod "busybox-7b57f96db7-h5qqv" (UID: "b89d6cc7-836d-44be-997e-9a7fe221a5d8") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
	Dec 13 19:09:14 ha-605114 kubelet[806]: E1213 19:09:14.239262     806 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-605114?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="200ms"
	Dec 13 19:09:15 ha-605114 kubelet[806]: E1213 19:09:15.060662     806 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ha-605114\": Get \"https://192.168.49.2:8443/api/v1/nodes/ha-605114?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Dec 13 19:09:17 ha-605114 kubelet[806]: I1213 19:09:17.413956     806 scope.go:117] "RemoveContainer" containerID="7bc9cb09a081ed47d17ecf35e2d91134eaacd5250ce00bcdebed3d1097640773"
	Dec 13 19:09:17 ha-605114 kubelet[806]: E1213 19:09:17.414150     806 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-605114_kube-system(6b36430ebbfe01869fc54848b2e1c2a9)\"" pod="kube-system/kube-controller-manager-ha-605114" podUID="6b36430ebbfe01869fc54848b2e1c2a9"
	Dec 13 19:09:19 ha-605114 kubelet[806]: E1213 19:09:19.556378     806 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{ha-605114.1880dbef376d6535  default   2620 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-605114,UID:ha-605114,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ha-605114 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ha-605114,},FirstTimestamp:2025-12-13 19:07:54 +0000 UTC,LastTimestamp:2025-12-13 19:07:54.517705313 +0000 UTC m=+0.296646960,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-605114,}"
	Dec 13 19:09:24 ha-605114 kubelet[806]: E1213 19:09:24.441298     806 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-605114?timeout=10s\": context deadline exceeded" interval="400ms"
	Dec 13 19:09:25 ha-605114 kubelet[806]: E1213 19:09:25.061462     806 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ha-605114\": Get \"https://192.168.49.2:8443/api/v1/nodes/ha-605114?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Dec 13 19:09:31 ha-605114 kubelet[806]: I1213 19:09:31.414094     806 scope.go:117] "RemoveContainer" containerID="7bc9cb09a081ed47d17ecf35e2d91134eaacd5250ce00bcdebed3d1097640773"
	Dec 13 19:09:34 ha-605114 kubelet[806]: E1213 19:09:34.844103     806 controller.go:145] "Failed to ensure lease exists, will retry" err="the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io ha-605114)" interval="800ms"
	Dec 13 19:09:35 ha-605114 kubelet[806]: E1213 19:09:35.061741     806 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ha-605114\": Get \"https://192.168.49.2:8443/api/v1/nodes/ha-605114?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Dec 13 19:09:39 ha-605114 kubelet[806]: W1213 19:09:39.981430     806 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/b8b77eca4604af1b6af60bca70543cf68142cce11359af7814880328fa73eb01/crio-1d4641fc3fdaccf9146fa15e852f55d85346be6c485420108067be6aabe0b5f4 WatchSource:0}: Error finding container 1d4641fc3fdaccf9146fa15e852f55d85346be6c485420108067be6aabe0b5f4: Status 404 returned error can't find the container with id 1d4641fc3fdaccf9146fa15e852f55d85346be6c485420108067be6aabe0b5f4
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-605114 -n ha-605114
helpers_test.go:270: (dbg) Run:  kubectl --context ha-605114 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: busybox-7b57f96db7-jxpf7
helpers_test.go:283: ======> post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context ha-605114 describe pod busybox-7b57f96db7-jxpf7
helpers_test.go:291: (dbg) kubectl --context ha-605114 describe pod busybox-7b57f96db7-jxpf7:

                                                
                                                
-- stdout --
	Name:             busybox-7b57f96db7-jxpf7
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-696pr (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-696pr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                From               Message
	  ----     ------            ----               ----               -------
	  Warning  FailedScheduling  52s (x5 over 97s)  default-scheduler  0/3 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/3 nodes are available: 1 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  52s (x5 over 56s)  default-scheduler  0/3 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/3 nodes are available: 1 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  50s (x3 over 51s)  default-scheduler  0/4 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 3 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/4 nodes are available: 1 No preemption victims found for incoming pod, 3 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  50s (x3 over 51s)  default-scheduler  0/4 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 3 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/4 nodes are available: 1 No preemption victims found for incoming pod, 3 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  6s                 default-scheduler  0/4 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  6s                 default-scheduler  0/4 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.

                                                
                                                
-- /stdout --
helpers_test.go:294: <<< TestMultiControlPlane/serial/AddSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (91.30s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (5.94s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:309: expected profile "ha-605114" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-605114\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-605114\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSShar
esRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.2\",\"ClusterName\":\"ha-605114\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"N
ame\":\"m02\",\"IP\":\"192.168.49.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.49.5\",\"Port\":0,\"KubernetesVersion\":\"v1.34.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":false,\"Worker\":true},{\"Name\":\"m05\",\"IP\":\"192.168.49.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.2\",\"ContainerRuntime\":\"\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-devi
ce-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":
false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-arm64 profile list --output json"
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect ha-605114
helpers_test.go:244: (dbg) docker inspect ha-605114:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b8b77eca4604af1b6af60bca70543cf68142cce11359af7814880328fa73eb01",
	        "Created": "2025-12-13T18:58:54.586877202Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 93050,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T19:07:47.614428932Z",
	            "FinishedAt": "2025-12-13T19:07:46.864889381Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/b8b77eca4604af1b6af60bca70543cf68142cce11359af7814880328fa73eb01/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b8b77eca4604af1b6af60bca70543cf68142cce11359af7814880328fa73eb01/hostname",
	        "HostsPath": "/var/lib/docker/containers/b8b77eca4604af1b6af60bca70543cf68142cce11359af7814880328fa73eb01/hosts",
	        "LogPath": "/var/lib/docker/containers/b8b77eca4604af1b6af60bca70543cf68142cce11359af7814880328fa73eb01/b8b77eca4604af1b6af60bca70543cf68142cce11359af7814880328fa73eb01-json.log",
	        "Name": "/ha-605114",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-605114:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-605114",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b8b77eca4604af1b6af60bca70543cf68142cce11359af7814880328fa73eb01",
	                "LowerDir": "/var/lib/docker/overlay2/8397f5133759b005c7933e08a612b6b8947df33c29226cae46c5c83d03247aff-init/diff:/var/lib/docker/overlay2/4cda671c3c20fb572bbb254b6cb2d66de67b46788c2aa883ec19024f1ff16f23/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8397f5133759b005c7933e08a612b6b8947df33c29226cae46c5c83d03247aff/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8397f5133759b005c7933e08a612b6b8947df33c29226cae46c5c83d03247aff/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8397f5133759b005c7933e08a612b6b8947df33c29226cae46c5c83d03247aff/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-605114",
	                "Source": "/var/lib/docker/volumes/ha-605114/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-605114",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-605114",
	                "name.minikube.sigs.k8s.io": "ha-605114",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7c9ba4aac7e27f5373688f6fc1a7a905972eca17b43555a3811eba451288f742",
	            "SandboxKey": "/var/run/docker/netns/7c9ba4aac7e2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32833"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32834"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32837"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32835"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32836"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-605114": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9a:0b:16:d7:dc:44",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a2f3617b1da5e979c171e0e32faeb143b6ffd1484ed485ce26cb0c66c2f2f8d4",
	                    "EndpointID": "ad19576bfc7fdb2d25ff186edf415bfaa77021d19f2378c0078a6b8dd2c2a121",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-605114",
	                        "b8b77eca4604"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-605114 -n ha-605114
helpers_test.go:253: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p ha-605114 logs -n 25: (2.445615807s)
helpers_test.go:261: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-605114 ssh -n ha-605114-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:03 UTC │
	│ ssh     │ ha-605114 ssh -n ha-605114-m04 sudo cat /home/docker/cp-test_ha-605114-m03_ha-605114-m04.txt                                         │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:03 UTC │
	│ cp      │ ha-605114 cp testdata/cp-test.txt ha-605114-m04:/home/docker/cp-test.txt                                                             │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:03 UTC │
	│ ssh     │ ha-605114 ssh -n ha-605114-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:03 UTC │
	│ cp      │ ha-605114 cp ha-605114-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1407969839/001/cp-test_ha-605114-m04.txt │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:03 UTC │
	│ ssh     │ ha-605114 ssh -n ha-605114-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:03 UTC │
	│ cp      │ ha-605114 cp ha-605114-m04:/home/docker/cp-test.txt ha-605114:/home/docker/cp-test_ha-605114-m04_ha-605114.txt                       │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:03 UTC │
	│ ssh     │ ha-605114 ssh -n ha-605114-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:03 UTC │
	│ ssh     │ ha-605114 ssh -n ha-605114 sudo cat /home/docker/cp-test_ha-605114-m04_ha-605114.txt                                                 │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:03 UTC │
	│ cp      │ ha-605114 cp ha-605114-m04:/home/docker/cp-test.txt ha-605114-m02:/home/docker/cp-test_ha-605114-m04_ha-605114-m02.txt               │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:03 UTC │
	│ ssh     │ ha-605114 ssh -n ha-605114-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:03 UTC │
	│ ssh     │ ha-605114 ssh -n ha-605114-m02 sudo cat /home/docker/cp-test_ha-605114-m04_ha-605114-m02.txt                                         │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:03 UTC │
	│ cp      │ ha-605114 cp ha-605114-m04:/home/docker/cp-test.txt ha-605114-m03:/home/docker/cp-test_ha-605114-m04_ha-605114-m03.txt               │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:03 UTC │
	│ ssh     │ ha-605114 ssh -n ha-605114-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:03 UTC │
	│ ssh     │ ha-605114 ssh -n ha-605114-m03 sudo cat /home/docker/cp-test_ha-605114-m04_ha-605114-m03.txt                                         │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:03 UTC │
	│ node    │ ha-605114 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:03 UTC │
	│ node    │ ha-605114 node start m02 --alsologtostderr -v 5                                                                                      │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:03 UTC │ 13 Dec 25 19:04 UTC │
	│ node    │ ha-605114 node list --alsologtostderr -v 5                                                                                           │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:04 UTC │                     │
	│ stop    │ ha-605114 stop --alsologtostderr -v 5                                                                                                │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:04 UTC │ 13 Dec 25 19:05 UTC │
	│ start   │ ha-605114 start --wait true --alsologtostderr -v 5                                                                                   │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:05 UTC │ 13 Dec 25 19:06 UTC │
	│ node    │ ha-605114 node list --alsologtostderr -v 5                                                                                           │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:06 UTC │                     │
	│ node    │ ha-605114 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:06 UTC │ 13 Dec 25 19:07 UTC │
	│ stop    │ ha-605114 stop --alsologtostderr -v 5                                                                                                │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:07 UTC │ 13 Dec 25 19:07 UTC │
	│ start   │ ha-605114 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                                         │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:07 UTC │                     │
	│ node    │ ha-605114 node add --control-plane --alsologtostderr -v 5                                                                            │ ha-605114 │ jenkins │ v1.37.0 │ 13 Dec 25 19:15 UTC │ 13 Dec 25 19:17 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 19:07:47
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 19:07:47.349427   92925 out.go:360] Setting OutFile to fd 1 ...
	I1213 19:07:47.349751   92925 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 19:07:47.349782   92925 out.go:374] Setting ErrFile to fd 2...
	I1213 19:07:47.349805   92925 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 19:07:47.350088   92925 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-2686/.minikube/bin
	I1213 19:07:47.350503   92925 out.go:368] Setting JSON to false
	I1213 19:07:47.351372   92925 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":6620,"bootTime":1765646248,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 19:07:47.351472   92925 start.go:143] virtualization:  
	I1213 19:07:47.357175   92925 out.go:179] * [ha-605114] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 19:07:47.360285   92925 notify.go:221] Checking for updates...
	I1213 19:07:47.363188   92925 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 19:07:47.366066   92925 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 19:07:47.368997   92925 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-2686/kubeconfig
	I1213 19:07:47.371939   92925 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-2686/.minikube
	I1213 19:07:47.374564   92925 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 19:07:47.377424   92925 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 19:07:47.380815   92925 config.go:182] Loaded profile config "ha-605114": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 19:07:47.381472   92925 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 19:07:47.411852   92925 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 19:07:47.411970   92925 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 19:07:47.470115   92925 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-13 19:07:47.460445366 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 19:07:47.470224   92925 docker.go:319] overlay module found
	I1213 19:07:47.473192   92925 out.go:179] * Using the docker driver based on existing profile
	I1213 19:07:47.475964   92925 start.go:309] selected driver: docker
	I1213 19:07:47.475980   92925 start.go:927] validating driver "docker" against &{Name:ha-605114 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-605114 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow
:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 19:07:47.476125   92925 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 19:07:47.476235   92925 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 19:07:47.532110   92925 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-13 19:07:47.522555398 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 19:07:47.532550   92925 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 19:07:47.532582   92925 cni.go:84] Creating CNI manager for ""
	I1213 19:07:47.532636   92925 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1213 19:07:47.532689   92925 start.go:353] cluster config:
	{Name:ha-605114 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-605114 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-s
erver:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 19:07:47.537457   92925 out.go:179] * Starting "ha-605114" primary control-plane node in "ha-605114" cluster
	I1213 19:07:47.540151   92925 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 19:07:47.542975   92925 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 19:07:47.545679   92925 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 19:07:47.545731   92925 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-2686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1213 19:07:47.545743   92925 cache.go:65] Caching tarball of preloaded images
	I1213 19:07:47.545753   92925 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 19:07:47.545828   92925 preload.go:238] Found /home/jenkins/minikube-integration/22122-2686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 19:07:47.545838   92925 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1213 19:07:47.545971   92925 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/config.json ...
	I1213 19:07:47.565319   92925 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 19:07:47.565343   92925 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 19:07:47.565364   92925 cache.go:243] Successfully downloaded all kic artifacts
	I1213 19:07:47.565392   92925 start.go:360] acquireMachinesLock for ha-605114: {Name:mk8d2cbed975abcdd5664438df80622381a361a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 19:07:47.565456   92925 start.go:364] duration metric: took 41.903µs to acquireMachinesLock for "ha-605114"
	I1213 19:07:47.565477   92925 start.go:96] Skipping create...Using existing machine configuration
	I1213 19:07:47.565483   92925 fix.go:54] fixHost starting: 
	I1213 19:07:47.565741   92925 cli_runner.go:164] Run: docker container inspect ha-605114 --format={{.State.Status}}
	I1213 19:07:47.581688   92925 fix.go:112] recreateIfNeeded on ha-605114: state=Stopped err=<nil>
	W1213 19:07:47.581717   92925 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 19:07:47.584947   92925 out.go:252] * Restarting existing docker container for "ha-605114" ...
	I1213 19:07:47.585046   92925 cli_runner.go:164] Run: docker start ha-605114
	I1213 19:07:47.865372   92925 cli_runner.go:164] Run: docker container inspect ha-605114 --format={{.State.Status}}
	I1213 19:07:47.883933   92925 kic.go:430] container "ha-605114" state is running.
	I1213 19:07:47.884352   92925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-605114
	I1213 19:07:47.906511   92925 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/config.json ...
	I1213 19:07:47.906746   92925 machine.go:94] provisionDockerMachine start ...
	I1213 19:07:47.906805   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114
	I1213 19:07:47.930498   92925 main.go:143] libmachine: Using SSH client type: native
	I1213 19:07:47.930829   92925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1213 19:07:47.930842   92925 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 19:07:47.931376   92925 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46728->127.0.0.1:32833: read: connection reset by peer
	I1213 19:07:51.084950   92925 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-605114
	
	I1213 19:07:51.084978   92925 ubuntu.go:182] provisioning hostname "ha-605114"
	I1213 19:07:51.085064   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114
	I1213 19:07:51.103183   92925 main.go:143] libmachine: Using SSH client type: native
	I1213 19:07:51.103509   92925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1213 19:07:51.103523   92925 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-605114 && echo "ha-605114" | sudo tee /etc/hostname
	I1213 19:07:51.262962   92925 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-605114
	
	I1213 19:07:51.263080   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114
	I1213 19:07:51.281758   92925 main.go:143] libmachine: Using SSH client type: native
	I1213 19:07:51.282067   92925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1213 19:07:51.282093   92925 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-605114' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-605114/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-605114' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 19:07:51.433225   92925 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 19:07:51.433251   92925 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-2686/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-2686/.minikube}
	I1213 19:07:51.433276   92925 ubuntu.go:190] setting up certificates
	I1213 19:07:51.433294   92925 provision.go:84] configureAuth start
	I1213 19:07:51.433356   92925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-605114
	I1213 19:07:51.451056   92925 provision.go:143] copyHostCerts
	I1213 19:07:51.451109   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem
	I1213 19:07:51.451157   92925 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem, removing ...
	I1213 19:07:51.451169   92925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem
	I1213 19:07:51.451244   92925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem (1082 bytes)
	I1213 19:07:51.451330   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem
	I1213 19:07:51.451351   92925 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem, removing ...
	I1213 19:07:51.451359   92925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem
	I1213 19:07:51.451387   92925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem (1123 bytes)
	I1213 19:07:51.451438   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem
	I1213 19:07:51.451459   92925 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem, removing ...
	I1213 19:07:51.451473   92925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem
	I1213 19:07:51.451505   92925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem (1675 bytes)
	I1213 19:07:51.451557   92925 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem org=jenkins.ha-605114 san=[127.0.0.1 192.168.49.2 ha-605114 localhost minikube]
	I1213 19:07:51.562646   92925 provision.go:177] copyRemoteCerts
	I1213 19:07:51.562709   92925 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 19:07:51.562753   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114
	I1213 19:07:51.579816   92925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/ha-605114/id_rsa Username:docker}
	I1213 19:07:51.684734   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1213 19:07:51.684815   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 19:07:51.703545   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1213 19:07:51.703625   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1213 19:07:51.721319   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1213 19:07:51.721382   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 19:07:51.738806   92925 provision.go:87] duration metric: took 305.496623ms to configureAuth
	I1213 19:07:51.738832   92925 ubuntu.go:206] setting minikube options for container-runtime
	I1213 19:07:51.739059   92925 config.go:182] Loaded profile config "ha-605114": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 19:07:51.739152   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114
	I1213 19:07:51.756183   92925 main.go:143] libmachine: Using SSH client type: native
	I1213 19:07:51.756478   92925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1213 19:07:51.756493   92925 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 19:07:52.176419   92925 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 19:07:52.176439   92925 machine.go:97] duration metric: took 4.269683244s to provisionDockerMachine
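The provisioning step above drops the runtime flags into /etc/sysconfig/crio.minikube over SSH and restarts CRI-O. How that file reaches the crio command line is not shown in this log; presumably the crio unit in the base image picks it up through an EnvironmentFile directive. A quick way to check on the node (the grep pattern is illustrative, not from this log):

	# where the drop-in is consumed, and what it contains
	systemctl cat crio | grep -i environment
	cat /etc/sysconfig/crio.minikube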
	I1213 19:07:52.176449   92925 start.go:293] postStartSetup for "ha-605114" (driver="docker")
	I1213 19:07:52.176460   92925 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 19:07:52.176518   92925 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 19:07:52.176563   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114
	I1213 19:07:52.201857   92925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/ha-605114/id_rsa Username:docker}
	I1213 19:07:52.305092   92925 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 19:07:52.308224   92925 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 19:07:52.308251   92925 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 19:07:52.308263   92925 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-2686/.minikube/addons for local assets ...
	I1213 19:07:52.308316   92925 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-2686/.minikube/files for local assets ...
	I1213 19:07:52.308413   92925 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem -> 46372.pem in /etc/ssl/certs
	I1213 19:07:52.308423   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem -> /etc/ssl/certs/46372.pem
	I1213 19:07:52.308523   92925 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 19:07:52.315982   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem --> /etc/ssl/certs/46372.pem (1708 bytes)
	I1213 19:07:52.333023   92925 start.go:296] duration metric: took 156.543018ms for postStartSetup
	I1213 19:07:52.333100   92925 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 19:07:52.333150   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114
	I1213 19:07:52.353818   92925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/ha-605114/id_rsa Username:docker}
	I1213 19:07:52.454237   92925 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 19:07:52.459167   92925 fix.go:56] duration metric: took 4.893676995s for fixHost
	I1213 19:07:52.459203   92925 start.go:83] releasing machines lock for "ha-605114", held for 4.893726932s
	I1213 19:07:52.459271   92925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-605114
	I1213 19:07:52.475811   92925 ssh_runner.go:195] Run: cat /version.json
	I1213 19:07:52.475832   92925 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 19:07:52.475868   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114
	I1213 19:07:52.475886   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114
	I1213 19:07:52.494277   92925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/ha-605114/id_rsa Username:docker}
	I1213 19:07:52.499565   92925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/ha-605114/id_rsa Username:docker}
	I1213 19:07:52.694122   92925 ssh_runner.go:195] Run: systemctl --version
	I1213 19:07:52.700676   92925 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 19:07:52.737939   92925 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 19:07:52.742564   92925 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 19:07:52.742632   92925 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 19:07:52.750413   92925 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 19:07:52.750438   92925 start.go:496] detecting cgroup driver to use...
	I1213 19:07:52.750469   92925 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 19:07:52.750516   92925 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 19:07:52.765290   92925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 19:07:52.779600   92925 docker.go:218] disabling cri-docker service (if available) ...
	I1213 19:07:52.779718   92925 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 19:07:52.795802   92925 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 19:07:52.809441   92925 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 19:07:52.921383   92925 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 19:07:53.050247   92925 docker.go:234] disabling docker service ...
	I1213 19:07:53.050357   92925 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 19:07:53.065412   92925 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 19:07:53.078985   92925 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 19:07:53.197041   92925 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 19:07:53.312016   92925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 19:07:53.324873   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 19:07:53.338465   92925 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 19:07:53.338566   92925 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:07:53.348165   92925 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 19:07:53.348244   92925 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:07:53.357334   92925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:07:53.366113   92925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:07:53.375030   92925 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 19:07:53.383092   92925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:07:53.392159   92925 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:07:53.400500   92925 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:07:53.409475   92925 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 19:07:53.416937   92925 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 19:07:53.424427   92925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 19:07:53.551020   92925 ssh_runner.go:195] Run: sudo systemctl restart crio
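The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf so the pause image, cgroup manager, conmon cgroup and the unprivileged-port sysctl line up with the kubelet config generated further down, and the daemon-reload plus restart make them take effect. A quick spot-check using only commands that already appear in this log (the grep pattern itself is illustrative):

	# confirm the rewritten values in the drop-in and in the effective config
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	crio config 2>/dev/null | grep -E 'pause_image|cgroup_manager'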
	I1213 19:07:53.724377   92925 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 19:07:53.724453   92925 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 19:07:53.728412   92925 start.go:564] Will wait 60s for crictl version
	I1213 19:07:53.728528   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:07:53.732393   92925 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 19:07:53.759934   92925 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 19:07:53.760022   92925 ssh_runner.go:195] Run: crio --version
	I1213 19:07:53.792422   92925 ssh_runner.go:195] Run: crio --version
	I1213 19:07:53.826233   92925 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1213 19:07:53.829188   92925 cli_runner.go:164] Run: docker network inspect ha-605114 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 19:07:53.845641   92925 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 19:07:53.849708   92925 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
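The one-liner above pins host.minikube.internal in /etc/hosts by filtering out any old entry, appending the new one to a temp file, and copying that file back over /etc/hosts; copying in place (rather than renaming) is likely needed because /etc/hosts is bind-mounted inside the container. A sketch of the same pattern with the value from this log:

	# replace-or-append a hosts entry without creating duplicates
	{ grep -v 'host\.minikube\.internal$' /etc/hosts; echo "192.168.49.1 host.minikube.internal"; } > /tmp/hosts.$$
	sudo cp /tmp/hosts.$$ /etc/hosts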
	I1213 19:07:53.860398   92925 kubeadm.go:884] updating cluster {Name:ha-605114 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-605114 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubeta
il:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 19:07:53.860545   92925 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 19:07:53.860602   92925 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 19:07:53.896899   92925 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 19:07:53.896925   92925 crio.go:433] Images already preloaded, skipping extraction
	I1213 19:07:53.896980   92925 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 19:07:53.927660   92925 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 19:07:53.927686   92925 cache_images.go:86] Images are preloaded, skipping loading
	I1213 19:07:53.927694   92925 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1213 19:07:53.927835   92925 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-605114 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-605114 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 19:07:53.927943   92925 ssh_runner.go:195] Run: crio config
	I1213 19:07:53.983293   92925 cni.go:84] Creating CNI manager for ""
	I1213 19:07:53.983320   92925 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1213 19:07:53.983344   92925 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 19:07:53.983367   92925 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-605114 NodeName:ha-605114 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 19:07:53.983512   92925 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-605114"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
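The multi-document config above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is what gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines later. A minimal offline sanity check, assuming the bundled kubeadm binary shown in this log; kubeadm config validate only reads the file and reports problems without touching the cluster:

	sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new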
	
	I1213 19:07:53.983533   92925 kube-vip.go:115] generating kube-vip config ...
	I1213 19:07:53.983586   92925 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1213 19:07:53.998146   92925 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1213 19:07:53.998359   92925 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
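The Pod manifest above is copied to /etc/kubernetes/manifests/kube-vip.yaml a few lines later (the 1358-byte scp), which is the staticPodPath set in the kubelet config, so kube-vip runs as a static pod and can host the 192.168.49.254 control-plane VIP before the API server is reachable. A quick check on the node (standard crictl flags, not taken from this log):

	ls -l /etc/kubernetes/manifests/kube-vip.yaml
	sudo crictl ps --name kube-vip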
	I1213 19:07:53.998456   92925 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 19:07:54.007466   92925 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 19:07:54.007601   92925 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1213 19:07:54.016257   92925 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1213 19:07:54.030166   92925 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 19:07:54.043943   92925 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1213 19:07:54.057568   92925 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1213 19:07:54.070913   92925 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1213 19:07:54.074912   92925 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 19:07:54.085321   92925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 19:07:54.204815   92925 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 19:07:54.219656   92925 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114 for IP: 192.168.49.2
	I1213 19:07:54.219678   92925 certs.go:195] generating shared ca certs ...
	I1213 19:07:54.219703   92925 certs.go:227] acquiring lock for ca certs: {Name:mkf9b87b9a1a82bdfcae8a54365751154f6d858f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:07:54.219837   92925 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key
	I1213 19:07:54.219890   92925 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key
	I1213 19:07:54.219904   92925 certs.go:257] generating profile certs ...
	I1213 19:07:54.219983   92925 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/client.key
	I1213 19:07:54.220016   92925 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.key.6ef1fccc
	I1213 19:07:54.220035   92925 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.crt.6ef1fccc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I1213 19:07:54.524208   92925 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.crt.6ef1fccc ...
	I1213 19:07:54.524279   92925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.crt.6ef1fccc: {Name:mk2a78acb3455aba2154553b94cc00acb06ef2bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:07:54.524506   92925 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.key.6ef1fccc ...
	I1213 19:07:54.524551   92925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.key.6ef1fccc: {Name:mk04e3ed8a0db9ab16dbffd5c3b9073d491094e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:07:54.524690   92925 certs.go:382] copying /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.crt.6ef1fccc -> /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.crt
	I1213 19:07:54.524872   92925 certs.go:386] copying /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.key.6ef1fccc -> /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.key
	I1213 19:07:54.525075   92925 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/proxy-client.key
	I1213 19:07:54.525118   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1213 19:07:54.525152   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1213 19:07:54.525194   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1213 19:07:54.525228   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1213 19:07:54.525260   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1213 19:07:54.525307   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1213 19:07:54.525343   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1213 19:07:54.525371   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1213 19:07:54.525461   92925 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637.pem (1338 bytes)
	W1213 19:07:54.525519   92925 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637_empty.pem, impossibly tiny 0 bytes
	I1213 19:07:54.525567   92925 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 19:07:54.525619   92925 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem (1082 bytes)
	I1213 19:07:54.525684   92925 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem (1123 bytes)
	I1213 19:07:54.525769   92925 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem (1675 bytes)
	I1213 19:07:54.525903   92925 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem (1708 bytes)
	I1213 19:07:54.525966   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:07:54.526009   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637.pem -> /usr/share/ca-certificates/4637.pem
	I1213 19:07:54.526041   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem -> /usr/share/ca-certificates/46372.pem
	I1213 19:07:54.526676   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 19:07:54.547219   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 19:07:54.566530   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 19:07:54.584290   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 19:07:54.601920   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1213 19:07:54.619619   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 19:07:54.637359   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 19:07:54.654838   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 19:07:54.674423   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 19:07:54.692475   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637.pem --> /usr/share/ca-certificates/4637.pem (1338 bytes)
	I1213 19:07:54.711269   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem --> /usr/share/ca-certificates/46372.pem (1708 bytes)
	I1213 19:07:54.730584   92925 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 19:07:54.744548   92925 ssh_runner.go:195] Run: openssl version
	I1213 19:07:54.750950   92925 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/46372.pem
	I1213 19:07:54.759097   92925 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/46372.pem /etc/ssl/certs/46372.pem
	I1213 19:07:54.766678   92925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/46372.pem
	I1213 19:07:54.770469   92925 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 18:27 /usr/share/ca-certificates/46372.pem
	I1213 19:07:54.770573   92925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/46372.pem
	I1213 19:07:54.811925   92925 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 19:07:54.820248   92925 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:07:54.829596   92925 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 19:07:54.843944   92925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:07:54.848466   92925 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:07:54.848527   92925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:07:54.910394   92925 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 19:07:54.922018   92925 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4637.pem
	I1213 19:07:54.934942   92925 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4637.pem /etc/ssl/certs/4637.pem
	I1213 19:07:54.943147   92925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4637.pem
	I1213 19:07:54.953686   92925 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 18:27 /usr/share/ca-certificates/4637.pem
	I1213 19:07:54.953799   92925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4637.pem
	I1213 19:07:55.020871   92925 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 19:07:55.034570   92925 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 19:07:55.045312   92925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 19:07:55.146347   92925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 19:07:55.197938   92925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 19:07:55.240888   92925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 19:07:55.293579   92925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 19:07:55.349397   92925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
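Each `-checkend 86400` call above asks OpenSSL whether the certificate will still be valid 24 hours from now; a non-zero exit would make minikube regenerate that cert instead of reusing it. The equivalent check in pure Go, as a minimal sketch (the file path is just one of the certs checked above):

```go
// checkend.go: a sketch of `openssl x509 -noout -checkend 86400` in Go —
// report whether a PEM certificate expires within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(pemPath string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(d)), nil
}

func main() {
	// Same 86400-second window used in the log.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("certificate expires within 24h - regenerate")
	} else {
		fmt.Println("certificate valid for at least another 24h")
	}
}
```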
	I1213 19:07:55.405749   92925 kubeadm.go:401] StartCluster: {Name:ha-605114 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-605114 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:
false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 19:07:55.405941   92925 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 19:07:55.406039   92925 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 19:07:55.476432   92925 cri.go:89] found id: "23b44f60db0dc9ad888430163cce4adc2cef45e4fff10aded1fd37e36e5d5955"
	I1213 19:07:55.476492   92925 cri.go:89] found id: "9a81ddd488bb7e9ca9d20cc8af4e9414463f3bf2bd40edd26c2e9395f731a3ec"
	I1213 19:07:55.476519   92925 cri.go:89] found id: "ee202abc8dba3b97ac56d7c3063ce4fae0734134ba47b9d6070588c897f7baf0"
	I1213 19:07:55.476536   92925 cri.go:89] found id: "3c729bb1538bfb45bc9b5542f5524916c96b118344d2be8a42e58a0bc6d4cb0d"
	I1213 19:07:55.476570   92925 cri.go:89] found id: "2b3744a5aa7a90a9d9036f0de528d8ed7e951f80254fa43fd57f666e0a6ccc86"
	I1213 19:07:55.476591   92925 cri.go:89] found id: ""
	I1213 19:07:55.476674   92925 ssh_runner.go:195] Run: sudo runc list -f json
	W1213 19:07:55.502827   92925 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T19:07:55Z" level=error msg="open /run/runc: no such file or directory"
	I1213 19:07:55.502965   92925 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 19:07:55.514772   92925 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 19:07:55.514841   92925 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 19:07:55.514932   92925 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 19:07:55.530907   92925 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 19:07:55.531414   92925 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-605114" does not appear in /home/jenkins/minikube-integration/22122-2686/kubeconfig
	I1213 19:07:55.531569   92925 kubeconfig.go:62] /home/jenkins/minikube-integration/22122-2686/kubeconfig needs updating (will repair): [kubeconfig missing "ha-605114" cluster setting kubeconfig missing "ha-605114" context setting]
	I1213 19:07:55.531908   92925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/kubeconfig: {Name:mkd364151ba0e08b56cf2ae826abbcc274faaabf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:07:55.532529   92925 kapi.go:59] client config for ha-605114: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/client.crt", KeyFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/client.key", CAFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, Us
erAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 19:07:55.533545   92925 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1213 19:07:55.533623   92925 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1213 19:07:55.533709   92925 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1213 19:07:55.533743   92925 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1213 19:07:55.533762   92925 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1213 19:07:55.533784   92925 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1213 19:07:55.534156   92925 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 19:07:55.550155   92925 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1213 19:07:55.550227   92925 kubeadm.go:602] duration metric: took 35.349185ms to restartPrimaryControlPlane
	I1213 19:07:55.550251   92925 kubeadm.go:403] duration metric: took 144.511847ms to StartCluster
	I1213 19:07:55.550281   92925 settings.go:142] acquiring lock: {Name:mkabef07beee93a0619ef6b8f854900ab9ed0899 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:07:55.550405   92925 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22122-2686/kubeconfig
	I1213 19:07:55.551146   92925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/kubeconfig: {Name:mkd364151ba0e08b56cf2ae826abbcc274faaabf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:07:55.551412   92925 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 19:07:55.551467   92925 start.go:242] waiting for startup goroutines ...
	I1213 19:07:55.551494   92925 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 19:07:55.552092   92925 config.go:182] Loaded profile config "ha-605114": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 19:07:55.557393   92925 out.go:179] * Enabled addons: 
	I1213 19:07:55.560282   92925 addons.go:530] duration metric: took 8.786078ms for enable addons: enabled=[]
	I1213 19:07:55.560370   92925 start.go:247] waiting for cluster config update ...
	I1213 19:07:55.560416   92925 start.go:256] writing updated cluster config ...
	I1213 19:07:55.563604   92925 out.go:203] 
	I1213 19:07:55.566673   92925 config.go:182] Loaded profile config "ha-605114": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 19:07:55.566871   92925 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/config.json ...
	I1213 19:07:55.570151   92925 out.go:179] * Starting "ha-605114-m02" control-plane node in "ha-605114" cluster
	I1213 19:07:55.572987   92925 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 19:07:55.575841   92925 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 19:07:55.578800   92925 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 19:07:55.578823   92925 cache.go:65] Caching tarball of preloaded images
	I1213 19:07:55.578933   92925 preload.go:238] Found /home/jenkins/minikube-integration/22122-2686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 19:07:55.578943   92925 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1213 19:07:55.579063   92925 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/config.json ...
	I1213 19:07:55.579269   92925 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 19:07:55.599207   92925 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 19:07:55.599233   92925 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 19:07:55.599247   92925 cache.go:243] Successfully downloaded all kic artifacts
	I1213 19:07:55.599269   92925 start.go:360] acquireMachinesLock for ha-605114-m02: {Name:mk43db0c2b2ac44e0e8dc9a68aa6922f0bb2fccb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 19:07:55.599325   92925 start.go:364] duration metric: took 36.989µs to acquireMachinesLock for "ha-605114-m02"
	I1213 19:07:55.599348   92925 start.go:96] Skipping create...Using existing machine configuration
	I1213 19:07:55.599358   92925 fix.go:54] fixHost starting: m02
	I1213 19:07:55.599613   92925 cli_runner.go:164] Run: docker container inspect ha-605114-m02 --format={{.State.Status}}
	I1213 19:07:55.630999   92925 fix.go:112] recreateIfNeeded on ha-605114-m02: state=Stopped err=<nil>
	W1213 19:07:55.631030   92925 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 19:07:55.634239   92925 out.go:252] * Restarting existing docker container for "ha-605114-m02" ...
	I1213 19:07:55.634323   92925 cli_runner.go:164] Run: docker start ha-605114-m02
	I1213 19:07:56.013613   92925 cli_runner.go:164] Run: docker container inspect ha-605114-m02 --format={{.State.Status}}
	I1213 19:07:56.043229   92925 kic.go:430] container "ha-605114-m02" state is running.
	I1213 19:07:56.043952   92925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-605114-m02
	I1213 19:07:56.072863   92925 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/config.json ...
	I1213 19:07:56.073198   92925 machine.go:94] provisionDockerMachine start ...
	I1213 19:07:56.073260   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114-m02
	I1213 19:07:56.107315   92925 main.go:143] libmachine: Using SSH client type: native
	I1213 19:07:56.107694   92925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I1213 19:07:56.107711   92925 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 19:07:56.108441   92925 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1213 19:07:59.320519   92925 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-605114-m02
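The first dial right after `docker start` fails with "ssh: handshake failed: EOF" because sshd inside the restarted container is not accepting connections yet; libmachine keeps retrying until the handshake succeeds about three seconds later. A minimal sketch of that retry loop, assuming golang.org/x/crypto/ssh; the address and user match the log, while the key path is illustrative (the real one is the machine's id_rsa under .minikube/machines):

```go
// sshretry.go: a sketch of retrying an SSH dial until sshd inside a freshly
// (re)started container is ready, mirroring the EOF-then-success sequence
// in the log above.
package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func dialWithRetry(addr, user, keyPath string, timeout time.Duration) (*ssh.Client, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test environment only
		Timeout:         5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for {
		client, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			return client, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("ssh not ready after %s: %w", timeout, err)
		}
		time.Sleep(time.Second) // sshd usually comes up within a few seconds
	}
}

func main() {
	client, err := dialWithRetry("127.0.0.1:32838", "docker", "/path/to/id_rsa", time.Minute)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer client.Close()
	fmt.Println("ssh ready")
}
```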
	
	I1213 19:07:59.320540   92925 ubuntu.go:182] provisioning hostname "ha-605114-m02"
	I1213 19:07:59.320600   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114-m02
	I1213 19:07:59.354148   92925 main.go:143] libmachine: Using SSH client type: native
	I1213 19:07:59.354465   92925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I1213 19:07:59.354476   92925 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-605114-m02 && echo "ha-605114-m02" | sudo tee /etc/hostname
	I1213 19:07:59.560753   92925 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-605114-m02
	
	I1213 19:07:59.560835   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114-m02
	I1213 19:07:59.590681   92925 main.go:143] libmachine: Using SSH client type: native
	I1213 19:07:59.590982   92925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I1213 19:07:59.590997   92925 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-605114-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-605114-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-605114-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 19:07:59.777428   92925 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 19:07:59.777502   92925 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-2686/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-2686/.minikube}
	I1213 19:07:59.777532   92925 ubuntu.go:190] setting up certificates
	I1213 19:07:59.777573   92925 provision.go:84] configureAuth start
	I1213 19:07:59.777669   92925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-605114-m02
	I1213 19:07:59.806547   92925 provision.go:143] copyHostCerts
	I1213 19:07:59.806589   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem
	I1213 19:07:59.806621   92925 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem, removing ...
	I1213 19:07:59.806628   92925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem
	I1213 19:07:59.806709   92925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem (1123 bytes)
	I1213 19:07:59.806788   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem
	I1213 19:07:59.806805   92925 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem, removing ...
	I1213 19:07:59.806810   92925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem
	I1213 19:07:59.806854   92925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem (1675 bytes)
	I1213 19:07:59.806898   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem
	I1213 19:07:59.806916   92925 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem, removing ...
	I1213 19:07:59.806920   92925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem
	I1213 19:07:59.806944   92925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem (1082 bytes)
	I1213 19:07:59.806989   92925 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem org=jenkins.ha-605114-m02 san=[127.0.0.1 192.168.49.3 ha-605114-m02 localhost minikube]
	I1213 19:07:59.961185   92925 provision.go:177] copyRemoteCerts
	I1213 19:07:59.961261   92925 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 19:07:59.961306   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114-m02
	I1213 19:07:59.986810   92925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/ha-605114-m02/id_rsa Username:docker}
	I1213 19:08:00.131955   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1213 19:08:00.132032   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 19:08:00.173539   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1213 19:08:00.173623   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1213 19:08:00.207894   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1213 19:08:00.207965   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 19:08:00.244666   92925 provision.go:87] duration metric: took 467.054938ms to configureAuth
	I1213 19:08:00.244712   92925 ubuntu.go:206] setting minikube options for container-runtime
	I1213 19:08:00.245918   92925 config.go:182] Loaded profile config "ha-605114": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 19:08:00.246082   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114-m02
	I1213 19:08:00.327171   92925 main.go:143] libmachine: Using SSH client type: native
	I1213 19:08:00.327492   92925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I1213 19:08:00.327508   92925 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 19:08:01.970074   92925 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 19:08:01.970150   92925 machine.go:97] duration metric: took 5.896940025s to provisionDockerMachine
	I1213 19:08:01.970177   92925 start.go:293] postStartSetup for "ha-605114-m02" (driver="docker")
	I1213 19:08:01.970221   92925 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 19:08:01.970316   92925 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 19:08:01.970411   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114-m02
	I1213 19:08:02.009089   92925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/ha-605114-m02/id_rsa Username:docker}
	I1213 19:08:02.129494   92925 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 19:08:02.136549   92925 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 19:08:02.136573   92925 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 19:08:02.136585   92925 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-2686/.minikube/addons for local assets ...
	I1213 19:08:02.136646   92925 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-2686/.minikube/files for local assets ...
	I1213 19:08:02.136728   92925 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem -> 46372.pem in /etc/ssl/certs
	I1213 19:08:02.136734   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem -> /etc/ssl/certs/46372.pem
	I1213 19:08:02.136842   92925 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 19:08:02.171248   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem --> /etc/ssl/certs/46372.pem (1708 bytes)
	I1213 19:08:02.216469   92925 start.go:296] duration metric: took 246.261152ms for postStartSetup
	I1213 19:08:02.216625   92925 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 19:08:02.216685   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114-m02
	I1213 19:08:02.262639   92925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/ha-605114-m02/id_rsa Username:docker}
	I1213 19:08:02.374718   92925 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 19:08:02.380084   92925 fix.go:56] duration metric: took 6.780718951s for fixHost
	I1213 19:08:02.380108   92925 start.go:83] releasing machines lock for "ha-605114-m02", held for 6.780770726s
	I1213 19:08:02.380176   92925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-605114-m02
	I1213 19:08:02.401071   92925 out.go:179] * Found network options:
	I1213 19:08:02.404164   92925 out.go:179]   - NO_PROXY=192.168.49.2
	W1213 19:08:02.407079   92925 proxy.go:120] fail to check proxy env: Error ip not in block
	W1213 19:08:02.407127   92925 proxy.go:120] fail to check proxy env: Error ip not in block
	I1213 19:08:02.407198   92925 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 19:08:02.407241   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114-m02
	I1213 19:08:02.407257   92925 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 19:08:02.407313   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114-m02
	I1213 19:08:02.441677   92925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/ha-605114-m02/id_rsa Username:docker}
	I1213 19:08:02.462715   92925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/ha-605114-m02/id_rsa Username:docker}
	I1213 19:08:02.700903   92925 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 19:08:02.788606   92925 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 19:08:02.788680   92925 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 19:08:02.802406   92925 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 19:08:02.802471   92925 start.go:496] detecting cgroup driver to use...
	I1213 19:08:02.802520   92925 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 19:08:02.802599   92925 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 19:08:02.821557   92925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 19:08:02.843971   92925 docker.go:218] disabling cri-docker service (if available) ...
	I1213 19:08:02.844081   92925 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 19:08:02.866953   92925 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 19:08:02.884909   92925 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 19:08:03.137948   92925 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 19:08:03.363884   92925 docker.go:234] disabling docker service ...
	I1213 19:08:03.363990   92925 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 19:08:03.388880   92925 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 19:08:03.405597   92925 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 19:08:03.645933   92925 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 19:08:03.919704   92925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 19:08:03.941774   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 19:08:03.972913   92925 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 19:08:03.973103   92925 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:08:03.988083   92925 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 19:08:03.988256   92925 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:08:04.019667   92925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:08:04.031645   92925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:08:04.049709   92925 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 19:08:04.086713   92925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:08:04.109181   92925 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:08:04.119963   92925 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:08:04.154436   92925 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 19:08:04.170086   92925 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 19:08:04.191001   92925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 19:08:04.484381   92925 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 19:09:34.781930   92925 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.297515083s)
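Before this restart (which took about 90 seconds here), the `sed` one-liners above rewrote /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to cgroupfs, run conmon in the pod cgroup, and allow unprivileged low ports via default_sysctls. A rough Go sketch of the same in-place edit, condensing the separate delete/insert sed steps into single replacements (regexes mirror the logged expressions):

```go
// crioconf.go: a sketch of the drop-in edits applied above to
// /etc/crio/crio.conf.d/02-crio.conf before restarting cri-o.
package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	conf := string(data)
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("02-crio.conf updated; apply with: systemctl restart crio")
}
```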
	I1213 19:09:34.781956   92925 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 19:09:34.782006   92925 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 19:09:34.785743   92925 start.go:564] Will wait 60s for crictl version
	I1213 19:09:34.785812   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:09:34.789353   92925 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 19:09:34.818524   92925 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 19:09:34.818612   92925 ssh_runner.go:195] Run: crio --version
	I1213 19:09:34.852441   92925 ssh_runner.go:195] Run: crio --version
	I1213 19:09:34.887257   92925 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1213 19:09:34.890293   92925 out.go:179]   - env NO_PROXY=192.168.49.2
	I1213 19:09:34.893426   92925 cli_runner.go:164] Run: docker network inspect ha-605114 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 19:09:34.911684   92925 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 19:09:34.915601   92925 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
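The bash one-liner above rewrites /etc/hosts idempotently: filter out any existing host.minikube.internal line, append a fresh "IP<tab>hostname" mapping, and copy the result back. A minimal Go sketch of the same idea (the temp-file-then-cp step of the original is condensed to a direct write; IP and hostname mirror the log):

```go
// hostsentry.go: a sketch of the managed /etc/hosts rewrite performed above —
// drop any stale line for the hostname, then append a fresh mapping.
package main

import (
	"fmt"
	"os"
	"strings"
)

func setHostsEntry(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue // stale entry for this hostname
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+hostname)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := setHostsEntry("/etc/hosts", "192.168.49.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```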
	I1213 19:09:34.925402   92925 mustload.go:66] Loading cluster: ha-605114
	I1213 19:09:34.925637   92925 config.go:182] Loaded profile config "ha-605114": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 19:09:34.925900   92925 cli_runner.go:164] Run: docker container inspect ha-605114 --format={{.State.Status}}
	I1213 19:09:34.944458   92925 host.go:66] Checking if "ha-605114" exists ...
	I1213 19:09:34.944731   92925 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114 for IP: 192.168.49.3
	I1213 19:09:34.944745   92925 certs.go:195] generating shared ca certs ...
	I1213 19:09:34.944760   92925 certs.go:227] acquiring lock for ca certs: {Name:mkf9b87b9a1a82bdfcae8a54365751154f6d858f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:09:34.944889   92925 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key
	I1213 19:09:34.944944   92925 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key
	I1213 19:09:34.944957   92925 certs.go:257] generating profile certs ...
	I1213 19:09:34.945069   92925 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/client.key
	I1213 19:09:34.945157   92925 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.key.29c07aea
	I1213 19:09:34.945202   92925 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/proxy-client.key
	I1213 19:09:34.945215   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1213 19:09:34.945230   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1213 19:09:34.945254   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1213 19:09:34.945266   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1213 19:09:34.945281   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1213 19:09:34.945294   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1213 19:09:34.945309   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1213 19:09:34.945328   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1213 19:09:34.945383   92925 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637.pem (1338 bytes)
	W1213 19:09:34.945424   92925 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637_empty.pem, impossibly tiny 0 bytes
	I1213 19:09:34.945446   92925 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 19:09:34.945479   92925 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem (1082 bytes)
	I1213 19:09:34.945508   92925 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem (1123 bytes)
	I1213 19:09:34.945538   92925 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem (1675 bytes)
	I1213 19:09:34.945583   92925 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem (1708 bytes)
	I1213 19:09:34.945616   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:09:34.945632   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637.pem -> /usr/share/ca-certificates/4637.pem
	I1213 19:09:34.945649   92925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem -> /usr/share/ca-certificates/46372.pem
	I1213 19:09:34.945719   92925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114
	I1213 19:09:34.963328   92925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/ha-605114/id_rsa Username:docker}
	I1213 19:09:35.065324   92925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1213 19:09:35.069081   92925 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1213 19:09:35.077819   92925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1213 19:09:35.081455   92925 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1213 19:09:35.089763   92925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1213 19:09:35.093612   92925 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1213 19:09:35.102260   92925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1213 19:09:35.106728   92925 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1213 19:09:35.115519   92925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1213 19:09:35.119196   92925 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1213 19:09:35.129001   92925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1213 19:09:35.132624   92925 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1213 19:09:35.141653   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 19:09:35.161897   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 19:09:35.182131   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 19:09:35.202060   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 19:09:35.222310   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1213 19:09:35.243497   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 19:09:35.265517   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 19:09:35.284987   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 19:09:35.302971   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 19:09:35.320388   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637.pem --> /usr/share/ca-certificates/4637.pem (1338 bytes)
	I1213 19:09:35.338865   92925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem --> /usr/share/ca-certificates/46372.pem (1708 bytes)
	I1213 19:09:35.356332   92925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1213 19:09:35.369616   92925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1213 19:09:35.383108   92925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1213 19:09:35.396928   92925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1213 19:09:35.410529   92925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1213 19:09:35.423162   92925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1213 19:09:35.436667   92925 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1213 19:09:35.450451   92925 ssh_runner.go:195] Run: openssl version
	I1213 19:09:35.457142   92925 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:09:35.464516   92925 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 19:09:35.472169   92925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:09:35.475920   92925 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:09:35.475984   92925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:09:35.516956   92925 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 19:09:35.524426   92925 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4637.pem
	I1213 19:09:35.532136   92925 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4637.pem /etc/ssl/certs/4637.pem
	I1213 19:09:35.539767   92925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4637.pem
	I1213 19:09:35.543798   92925 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 18:27 /usr/share/ca-certificates/4637.pem
	I1213 19:09:35.543906   92925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4637.pem
	I1213 19:09:35.586837   92925 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 19:09:35.594791   92925 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/46372.pem
	I1213 19:09:35.602550   92925 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/46372.pem /etc/ssl/certs/46372.pem
	I1213 19:09:35.610984   92925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/46372.pem
	I1213 19:09:35.614895   92925 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 18:27 /usr/share/ca-certificates/46372.pem
	I1213 19:09:35.614973   92925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/46372.pem
	I1213 19:09:35.661484   92925 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 19:09:35.668847   92925 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 19:09:35.672924   92925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 19:09:35.714926   92925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 19:09:35.757278   92925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 19:09:35.798060   92925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 19:09:35.840340   92925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 19:09:35.883228   92925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 19:09:35.926498   92925 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.2 crio true true} ...
	I1213 19:09:35.926597   92925 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-605114-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-605114 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 19:09:35.926628   92925 kube-vip.go:115] generating kube-vip config ...
	I1213 19:09:35.926680   92925 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1213 19:09:35.939407   92925 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1213 19:09:35.939464   92925 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1213 19:09:35.939538   92925 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 19:09:35.948342   92925 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 19:09:35.948446   92925 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1213 19:09:35.956523   92925 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1213 19:09:35.970227   92925 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 19:09:35.985384   92925 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1213 19:09:36.004385   92925 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1213 19:09:36.008483   92925 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 19:09:36.019218   92925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 19:09:36.155982   92925 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 19:09:36.170330   92925 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 19:09:36.170793   92925 config.go:182] Loaded profile config "ha-605114": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 19:09:36.174251   92925 out.go:179] * Verifying Kubernetes components...
	I1213 19:09:36.177213   92925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 19:09:36.319740   92925 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 19:09:36.334811   92925 kapi.go:59] client config for ha-605114: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/client.crt", KeyFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/profiles/ha-605114/client.key", CAFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1213 19:09:36.334886   92925 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1213 19:09:36.335095   92925 node_ready.go:35] waiting up to 6m0s for node "ha-605114-m02" to be "Ready" ...
	I1213 19:09:39.281934   92925 node_ready.go:49] node "ha-605114-m02" is "Ready"
	I1213 19:09:39.281962   92925 node_ready.go:38] duration metric: took 2.946847766s for node "ha-605114-m02" to be "Ready" ...
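With kubelet started on m02, minikube waits up to 6m for the node's Ready condition before moving on to the apiserver-process wait below. A minimal client-go sketch of that readiness poll; the kubeconfig source and polling interval are illustrative, the node name matches the log:

```go
// nodeready.go: a sketch of polling a node's Ready condition with client-go,
// the same check the node_ready wait above performs.
package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, interval time.Duration) error {
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("node %q not Ready: %w", name, ctx.Err())
		case <-time.After(interval):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitNodeReady(ctx, cs, "ha-605114-m02", 2*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("node is Ready")
}
```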
	I1213 19:09:39.281975   92925 api_server.go:52] waiting for apiserver process to appear ...
	I1213 19:09:39.282034   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:39.782149   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:40.282856   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:40.782144   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:41.282958   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:41.782581   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:42.282264   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:42.782257   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:43.283132   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:43.782112   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:44.282168   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:44.782088   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:45.282593   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:45.782122   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:46.282927   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:46.782182   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:47.282980   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:47.783112   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:48.282633   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:48.782211   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:49.282732   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:49.782187   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:50.282735   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:50.782142   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:51.282519   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:51.782152   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:52.282197   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:52.782636   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:53.282768   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:53.782116   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:54.282300   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:54.782182   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:55.282883   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:55.783092   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:56.282203   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:56.783098   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:57.282717   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:57.782189   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:58.282252   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:58.782909   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:59.282100   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:09:59.782310   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:00.289145   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:00.782212   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:01.282192   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:01.782760   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:02.282108   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:02.782972   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:03.282353   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:03.782328   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:04.282366   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:04.782174   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:05.282835   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:05.782488   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:06.283036   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:06.782436   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:07.282292   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:07.782212   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:08.283033   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:08.783070   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:09.282897   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:09.782668   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:10.282222   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:10.782267   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:11.282198   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:11.782837   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:12.282212   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:12.783009   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:13.282406   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:13.782556   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:14.283140   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:14.782783   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:15.283077   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:15.783150   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:16.282934   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:16.783092   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:17.282186   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:17.782253   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:18.282771   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:18.782339   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:19.282255   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:19.782254   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:20.282346   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:20.782992   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:21.282270   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:21.782169   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:22.282176   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:22.782681   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:23.282402   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:23.783116   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:24.282118   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:24.782962   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:25.283031   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:25.783024   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:26.283105   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:26.782110   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:27.282833   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:27.782332   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:28.282978   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:28.782284   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:29.283095   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:29.782866   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:30.282438   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:30.782580   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:31.282697   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:31.783148   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:32.283119   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:32.782971   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:33.282108   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:33.783088   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:34.283075   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:34.782667   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:35.282868   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:35.782514   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:36.282200   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:10:36.282308   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:10:36.311092   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:36.311117   92925 cri.go:89] found id: ""
	I1213 19:10:36.311125   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:10:36.311180   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:36.314888   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:10:36.314970   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:10:36.342553   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:36.342573   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:36.342578   92925 cri.go:89] found id: ""
	I1213 19:10:36.342586   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:10:36.342655   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:36.346486   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:36.349986   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:10:36.350061   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:10:36.375198   92925 cri.go:89] found id: ""
	I1213 19:10:36.375262   92925 logs.go:282] 0 containers: []
	W1213 19:10:36.375275   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:10:36.375281   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:10:36.375350   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:10:36.406767   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:36.406789   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:36.406794   92925 cri.go:89] found id: ""
	I1213 19:10:36.406801   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:10:36.406857   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:36.410743   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:36.414390   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:10:36.414490   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:10:36.441810   92925 cri.go:89] found id: ""
	I1213 19:10:36.441833   92925 logs.go:282] 0 containers: []
	W1213 19:10:36.441841   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:10:36.441848   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:10:36.441911   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:10:36.468354   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:36.468374   92925 cri.go:89] found id: ""
	I1213 19:10:36.468382   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:10:36.468436   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:36.472238   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:10:36.472316   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:10:36.500356   92925 cri.go:89] found id: ""
	I1213 19:10:36.500383   92925 logs.go:282] 0 containers: []
	W1213 19:10:36.500394   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:10:36.500404   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:10:36.500414   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:10:36.593811   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:10:36.593845   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:10:36.607625   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:10:36.607656   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:10:37.031907   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:10:37.023726    1511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:37.024402    1511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:37.025999    1511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:37.026604    1511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:37.028296    1511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:10:37.023726    1511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:37.024402    1511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:37.025999    1511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:37.026604    1511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:37.028296    1511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:10:37.031933   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:10:37.031948   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:37.057050   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:10:37.057079   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:37.097228   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:10:37.097262   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:37.148963   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:10:37.149014   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:37.217399   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:10:37.217436   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:37.248174   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:10:37.248203   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:37.274722   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:10:37.274748   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:10:37.355342   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:10:37.355379   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
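The pass above is the diagnostic cycle minikube repeats while waiting for the apiserver: poll for the process, enumerate control-plane containers with crictl, then dump component and kubelet logs. A minimal hand-run sketch of the same checks, using only the commands that appear in this log (run inside the node, e.g. via `minikube ssh`; the loop bound and `head -n1` are illustrative assumptions, not part of minikube's own logic):

    # Poll until a kube-apiserver process for this profile shows up (same pgrep the log uses).
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do
      sleep 0.5
    done
    # List the kube-apiserver container and tail its logs, as the log-gathering step does.
    ID="$(sudo crictl ps -a --quiet --name=kube-apiserver | head -n1)"
    sudo crictl logs --tail 400 "$ID"
    # Kubelet logs often explain why the apiserver container keeps failing.
    sudo journalctl -u kubelet -n 400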
	I1213 19:10:39.885413   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:39.896181   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:10:39.896250   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:10:39.928054   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:39.928078   92925 cri.go:89] found id: ""
	I1213 19:10:39.928087   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:10:39.928142   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:39.932690   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:10:39.932760   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:10:39.962089   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:39.962110   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:39.962114   92925 cri.go:89] found id: ""
	I1213 19:10:39.962122   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:10:39.962178   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:39.966008   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:39.970141   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:10:39.970211   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:10:40.031915   92925 cri.go:89] found id: ""
	I1213 19:10:40.031938   92925 logs.go:282] 0 containers: []
	W1213 19:10:40.031947   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:10:40.031954   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:10:40.032022   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:10:40.075124   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:40.075145   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:40.075150   92925 cri.go:89] found id: ""
	I1213 19:10:40.075157   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:10:40.075216   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:40.079588   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:40.083956   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:10:40.084077   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:10:40.120592   92925 cri.go:89] found id: ""
	I1213 19:10:40.120623   92925 logs.go:282] 0 containers: []
	W1213 19:10:40.120633   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:10:40.120640   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:10:40.120707   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:10:40.162573   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:40.162599   92925 cri.go:89] found id: ""
	I1213 19:10:40.162620   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:10:40.162692   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:40.167731   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:10:40.167810   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:10:40.197646   92925 cri.go:89] found id: ""
	I1213 19:10:40.197681   92925 logs.go:282] 0 containers: []
	W1213 19:10:40.197692   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:10:40.197701   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:10:40.197714   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:10:40.279428   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:10:40.270096    1639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:40.270945    1639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:40.271678    1639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:40.273521    1639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:40.274072    1639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:10:40.270096    1639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:40.270945    1639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:40.271678    1639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:40.273521    1639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:40.274072    1639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:10:40.279462   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:10:40.279476   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:40.317833   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:10:40.317867   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:40.365303   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:10:40.365339   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:40.391972   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:10:40.392006   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:10:40.467785   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:10:40.467824   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:10:40.499555   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:10:40.499587   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:10:40.601537   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:10:40.601571   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:10:40.614326   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:10:40.614357   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:40.643794   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:10:40.643823   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:40.696205   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:10:40.696242   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:43.224045   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:43.234786   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:10:43.234854   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:10:43.262459   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:43.262481   92925 cri.go:89] found id: ""
	I1213 19:10:43.262489   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:10:43.262544   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:43.267289   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:10:43.267362   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:10:43.294825   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:43.294846   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:43.294858   92925 cri.go:89] found id: ""
	I1213 19:10:43.294873   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:10:43.294931   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:43.298717   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:43.302500   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:10:43.302576   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:10:43.328978   92925 cri.go:89] found id: ""
	I1213 19:10:43.329001   92925 logs.go:282] 0 containers: []
	W1213 19:10:43.329048   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:10:43.329055   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:10:43.329115   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:10:43.358394   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:43.358419   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:43.358426   92925 cri.go:89] found id: ""
	I1213 19:10:43.358434   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:10:43.358544   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:43.363176   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:43.366906   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:10:43.366996   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:10:43.396556   92925 cri.go:89] found id: ""
	I1213 19:10:43.396583   92925 logs.go:282] 0 containers: []
	W1213 19:10:43.396592   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:10:43.396598   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:10:43.396657   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:10:43.422776   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:43.422803   92925 cri.go:89] found id: ""
	I1213 19:10:43.422813   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:10:43.422886   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:43.426512   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:10:43.426579   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:10:43.452942   92925 cri.go:89] found id: ""
	I1213 19:10:43.452966   92925 logs.go:282] 0 containers: []
	W1213 19:10:43.452975   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:10:43.452984   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:10:43.452996   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:43.479637   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:10:43.479708   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:10:43.492492   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:10:43.492521   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:43.555898   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:10:43.555930   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:43.583059   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:10:43.583089   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:10:43.665528   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:10:43.665562   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:10:43.713108   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:10:43.713136   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:10:43.817894   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:10:43.817930   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:10:43.900953   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:10:43.892916    1822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:43.893797    1822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:43.895356    1822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:43.895650    1822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:43.897247    1822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:10:43.892916    1822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:43.893797    1822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:43.895356    1822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:43.895650    1822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:43.897247    1822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:10:43.900978   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:10:43.900992   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:43.928040   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:10:43.928067   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:43.989295   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:10:43.989349   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:46.551759   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:46.562922   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:10:46.562999   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:10:46.590576   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:46.590607   92925 cri.go:89] found id: ""
	I1213 19:10:46.590615   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:10:46.590669   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:46.594481   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:10:46.594557   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:10:46.619444   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:46.619466   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:46.619472   92925 cri.go:89] found id: ""
	I1213 19:10:46.619480   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:10:46.619562   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:46.623350   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:46.626652   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:10:46.626726   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:10:46.655019   92925 cri.go:89] found id: ""
	I1213 19:10:46.655045   92925 logs.go:282] 0 containers: []
	W1213 19:10:46.655055   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:10:46.655061   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:10:46.655119   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:10:46.685081   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:46.685108   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:46.685113   92925 cri.go:89] found id: ""
	I1213 19:10:46.685121   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:10:46.685178   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:46.689664   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:46.693381   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:10:46.693455   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:10:46.719871   92925 cri.go:89] found id: ""
	I1213 19:10:46.719897   92925 logs.go:282] 0 containers: []
	W1213 19:10:46.719906   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:10:46.719914   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:10:46.719979   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:10:46.747153   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:46.747176   92925 cri.go:89] found id: ""
	I1213 19:10:46.747184   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:10:46.747239   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:46.751093   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:10:46.751198   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:10:46.777729   92925 cri.go:89] found id: ""
	I1213 19:10:46.777800   92925 logs.go:282] 0 containers: []
	W1213 19:10:46.777816   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:10:46.777827   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:10:46.777840   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:46.807286   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:10:46.807315   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:10:46.900226   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:10:46.900266   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:10:46.913850   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:10:46.913877   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:10:46.995097   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:10:46.986432    1930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:46.987537    1930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:46.988185    1930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:46.989944    1930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:46.990430    1930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:10:46.986432    1930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:46.987537    1930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:46.988185    1930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:46.989944    1930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:46.990430    1930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:10:46.995121   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:10:46.995146   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:47.020980   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:10:47.021038   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:47.062312   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:10:47.062348   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:10:47.143840   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:10:47.143916   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:10:47.176420   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:10:47.176455   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:47.221958   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:10:47.222003   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:47.276308   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:10:47.276349   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:49.804769   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:49.815535   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:10:49.815609   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:10:49.841153   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:49.841227   92925 cri.go:89] found id: ""
	I1213 19:10:49.841258   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:10:49.841341   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:49.844798   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:10:49.844903   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:10:49.872086   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:49.872111   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:49.872117   92925 cri.go:89] found id: ""
	I1213 19:10:49.872124   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:10:49.872178   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:49.875975   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:49.879817   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:10:49.879892   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:10:49.918961   92925 cri.go:89] found id: ""
	I1213 19:10:49.918987   92925 logs.go:282] 0 containers: []
	W1213 19:10:49.918996   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:10:49.919002   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:10:49.919059   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:10:49.959969   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:49.959994   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:49.959999   92925 cri.go:89] found id: ""
	I1213 19:10:49.960007   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:10:49.960063   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:49.964635   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:49.969140   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:10:49.969208   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:10:50.006023   92925 cri.go:89] found id: ""
	I1213 19:10:50.006049   92925 logs.go:282] 0 containers: []
	W1213 19:10:50.006058   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:10:50.006064   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:10:50.006143   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:10:50.040945   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:50.040965   92925 cri.go:89] found id: ""
	I1213 19:10:50.040973   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:10:50.041060   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:50.044991   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:10:50.045100   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:10:50.073352   92925 cri.go:89] found id: ""
	I1213 19:10:50.073383   92925 logs.go:282] 0 containers: []
	W1213 19:10:50.073409   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:10:50.073420   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:10:50.073437   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:10:50.092169   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:10:50.092219   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:50.167681   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:10:50.167719   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:50.220989   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:10:50.221028   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:50.252059   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:10:50.252091   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:10:50.358508   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:10:50.358555   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:10:50.434424   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:10:50.426219    2082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:50.426850    2082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:50.428449    2082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:50.429020    2082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:50.430880    2082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:10:50.426219    2082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:50.426850    2082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:50.428449    2082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:50.429020    2082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:50.430880    2082 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:10:50.434452   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:10:50.434467   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:50.458963   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:10:50.458992   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:50.516376   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:10:50.516410   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:50.543978   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:10:50.544009   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:10:50.619429   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:10:50.619468   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:10:53.153421   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:53.163979   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:10:53.164048   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:10:53.191198   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:53.191259   92925 cri.go:89] found id: ""
	I1213 19:10:53.191291   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:10:53.191363   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:53.195132   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:10:53.195204   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:10:53.222253   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:53.222276   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:53.222280   92925 cri.go:89] found id: ""
	I1213 19:10:53.222287   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:10:53.222370   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:53.226176   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:53.229762   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:10:53.229878   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:10:53.260062   92925 cri.go:89] found id: ""
	I1213 19:10:53.260088   92925 logs.go:282] 0 containers: []
	W1213 19:10:53.260096   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:10:53.260103   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:10:53.260159   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:10:53.289940   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:53.290005   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:53.290024   92925 cri.go:89] found id: ""
	I1213 19:10:53.290037   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:10:53.290106   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:53.293745   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:53.297116   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:10:53.297199   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:10:53.324233   92925 cri.go:89] found id: ""
	I1213 19:10:53.324259   92925 logs.go:282] 0 containers: []
	W1213 19:10:53.324268   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:10:53.324274   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:10:53.324329   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:10:53.355230   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:53.355252   92925 cri.go:89] found id: ""
	I1213 19:10:53.355260   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:10:53.355312   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:53.358865   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:10:53.358932   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:10:53.388377   92925 cri.go:89] found id: ""
	I1213 19:10:53.388460   92925 logs.go:282] 0 containers: []
	W1213 19:10:53.388486   92925 logs.go:284] No container was found matching "kindnet"
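	The cri.go lines above enumerate control-plane containers by name with `sudo crictl ps -a --quiet --name=<component>`, which prints one container ID per line (or nothing, as for coredns, kube-proxy and kindnet here). The following is a rough sketch of that enumeration, assuming crictl is on PATH and the command runs directly on the node rather than over SSH as in the report; listCRIContainers is a hypothetical helper, not minikube's function.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listCRIContainers returns the IDs of all containers (running or exited)
	// whose name matches the given component, using the same crictl flags as the log above.
	func listCRIContainers(component string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet"}
		for _, c := range components {
			ids, err := listCRIContainers(c)
			if err != nil {
				fmt.Printf("%s: crictl failed: %v\n", c, err)
				continue
			}
			fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
		}
	}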
	I1213 19:10:53.388531   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:10:53.388561   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:10:53.482197   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:10:53.482233   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:10:53.495635   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:10:53.495666   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:53.527174   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:10:53.527201   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:53.568473   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:10:53.568509   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:53.613038   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:10:53.613068   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:53.666213   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:10:53.666248   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:10:53.746993   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:10:53.747031   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:10:53.777726   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:10:53.777758   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:10:53.849162   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:10:53.840835    2240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:53.841725    2240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:53.842564    2240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:53.844081    2240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:53.844396    2240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:10:53.840835    2240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:53.841725    2240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:53.842564    2240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:53.844081    2240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:53.844396    2240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:10:53.849193   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:10:53.849207   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:53.879522   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:10:53.879551   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:56.408599   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:56.420063   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:10:56.420130   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:10:56.446598   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:56.446622   92925 cri.go:89] found id: ""
	I1213 19:10:56.446630   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:10:56.446691   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:56.450451   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:10:56.450519   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:10:56.477437   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:56.477460   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:56.477465   92925 cri.go:89] found id: ""
	I1213 19:10:56.477472   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:10:56.477560   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:56.481341   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:56.484891   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:10:56.484963   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:10:56.513437   92925 cri.go:89] found id: ""
	I1213 19:10:56.513459   92925 logs.go:282] 0 containers: []
	W1213 19:10:56.513467   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:10:56.513473   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:10:56.513531   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:10:56.542772   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:56.542812   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:56.542818   92925 cri.go:89] found id: ""
	I1213 19:10:56.542845   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:10:56.542930   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:56.546773   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:56.550355   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:10:56.550430   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:10:56.577663   92925 cri.go:89] found id: ""
	I1213 19:10:56.577687   92925 logs.go:282] 0 containers: []
	W1213 19:10:56.577695   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:10:56.577701   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:10:56.577811   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:10:56.604755   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:56.604827   92925 cri.go:89] found id: ""
	I1213 19:10:56.604849   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:10:56.604945   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:56.608549   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:10:56.608618   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:10:56.635735   92925 cri.go:89] found id: ""
	I1213 19:10:56.635759   92925 logs.go:282] 0 containers: []
	W1213 19:10:56.635767   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:10:56.635777   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:10:56.635789   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:10:56.729353   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:10:56.729388   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:10:56.741845   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:10:56.741874   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:10:56.815151   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:10:56.806729    2332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:56.807450    2332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:56.808916    2332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:56.809436    2332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:56.811611    2332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:10:56.806729    2332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:56.807450    2332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:56.808916    2332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:56.809436    2332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:10:56.811611    2332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:10:56.815178   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:10:56.815193   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:56.871711   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:10:56.871748   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:56.904003   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:10:56.904034   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:10:56.941519   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:10:56.941549   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:56.974994   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:10:56.975022   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:57.015259   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:10:57.015290   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:57.059492   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:10:57.059527   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:57.085661   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:10:57.085690   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
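	Each "Gathering logs for <component> [<id>]" step above runs `crictl logs --tail 400 <id>` for the container IDs found earlier. A sketch of that loop is shown below, under the assumption that the IDs were collected as in the previous sketch and that plain `crictl` on PATH behaves like the `/usr/local/bin/crictl` invoked in the report; gatherLogs is a hypothetical helper.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// gatherLogs fetches the last n lines of a container's log with crictl,
	// mirroring the `sudo /usr/local/bin/crictl logs --tail 400 <id>` calls in the report.
	func gatherLogs(id string, n int) (string, error) {
		out, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
		return string(out), err
	}

	func main() {
		// IDs copied from the log above; in practice they come from crictl ps --quiet.
		ids := []string{
			"667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e", // kube-apiserver
			"808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894", // etcd
		}
		for _, id := range ids {
			logs, err := gatherLogs(id, 400)
			if err != nil {
				fmt.Printf("%s: %v\n", id[:12], err)
				continue
			}
			fmt.Printf("=== %s (%d bytes) ===\n%s\n", id[:12], len(logs), logs)
		}
	}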
	I1213 19:10:59.675412   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:10:59.686117   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:10:59.686192   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:10:59.710921   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:10:59.710951   92925 cri.go:89] found id: ""
	I1213 19:10:59.710960   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:10:59.711015   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:59.714894   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:10:59.715008   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:10:59.742170   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:10:59.742193   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:10:59.742199   92925 cri.go:89] found id: ""
	I1213 19:10:59.742206   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:10:59.742261   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:59.746138   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:59.750866   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:10:59.750942   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:10:59.777917   92925 cri.go:89] found id: ""
	I1213 19:10:59.777943   92925 logs.go:282] 0 containers: []
	W1213 19:10:59.777951   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:10:59.777957   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:10:59.778015   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:10:59.803883   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:10:59.803903   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:10:59.803908   92925 cri.go:89] found id: ""
	I1213 19:10:59.803916   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:10:59.803971   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:59.807903   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:59.811388   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:10:59.811453   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:10:59.837952   92925 cri.go:89] found id: ""
	I1213 19:10:59.837977   92925 logs.go:282] 0 containers: []
	W1213 19:10:59.837986   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:10:59.837992   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:10:59.838048   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:10:59.864431   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:10:59.864490   92925 cri.go:89] found id: ""
	I1213 19:10:59.864512   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:10:59.864594   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:10:59.869272   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:10:59.869345   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:10:59.896571   92925 cri.go:89] found id: ""
	I1213 19:10:59.896603   92925 logs.go:282] 0 containers: []
	W1213 19:10:59.896612   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:10:59.896622   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:10:59.896634   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:10:59.997222   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:10:59.997313   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:00.122051   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:00.122166   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:00.334228   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:00.323858    2472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:00.324625    2472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:00.326029    2472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:00.326896    2472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:00.328835    2472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:00.323858    2472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:00.324625    2472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:00.326029    2472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:00.326896    2472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:00.328835    2472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:00.334270   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:00.334284   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:00.397345   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:00.397381   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:00.460082   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:00.460118   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:00.507030   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:00.507068   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:00.561579   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:00.561611   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:00.590319   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:11:00.590346   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:00.618590   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:00.618617   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:00.700620   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:00.700655   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:03.247538   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:03.260650   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:03.260720   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:03.296710   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:03.296736   92925 cri.go:89] found id: ""
	I1213 19:11:03.296744   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:03.296804   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:03.300974   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:03.301083   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:03.332989   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:03.333019   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:03.333024   92925 cri.go:89] found id: ""
	I1213 19:11:03.333031   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:03.333085   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:03.337959   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:03.341569   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:03.341642   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:03.367805   92925 cri.go:89] found id: ""
	I1213 19:11:03.367831   92925 logs.go:282] 0 containers: []
	W1213 19:11:03.367840   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:03.367847   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:03.367910   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:03.396144   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:03.396165   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:03.396170   92925 cri.go:89] found id: ""
	I1213 19:11:03.396177   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:03.396234   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:03.400643   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:03.404350   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:03.404422   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:03.431472   92925 cri.go:89] found id: ""
	I1213 19:11:03.431498   92925 logs.go:282] 0 containers: []
	W1213 19:11:03.431508   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:03.431520   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:03.431602   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:03.459968   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:03.460034   92925 cri.go:89] found id: ""
	I1213 19:11:03.460058   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:11:03.460134   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:03.464138   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:03.464230   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:03.491871   92925 cri.go:89] found id: ""
	I1213 19:11:03.491897   92925 logs.go:282] 0 containers: []
	W1213 19:11:03.491906   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:03.491916   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:03.491928   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:03.528376   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:03.528451   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:03.562095   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:03.562124   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:03.575381   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:03.575410   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:03.602586   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:03.602615   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:03.651880   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:03.651912   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:03.708104   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:11:03.708142   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:03.736240   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:03.736268   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:03.814277   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:03.814314   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:03.920505   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:03.920542   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:04.025281   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:04.014467    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:04.015603    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:04.016913    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:04.017960    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:04.019083    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:04.014467    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:04.015603    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:04.016913    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:04.017960    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:04.019083    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
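	The "failed describe nodes" warnings come from running the bundled kubectl (`/var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig`) while the apiserver is unreachable, so the command exits with status 1 and only stderr is produced. A local, non-SSH sketch of running that command and distinguishing a non-zero exit from other failures follows; the binary and kubeconfig paths are taken from the log, everything else is an assumption.

	package main

	import (
		"bytes"
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("sudo",
			"/var/lib/minikube/binaries/v1.34.2/kubectl",
			"describe", "nodes",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		var stdout, stderr bytes.Buffer
		cmd.Stdout = &stdout
		cmd.Stderr = &stderr

		err := cmd.Run()
		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Print(stdout.String())
		case errors.As(err, &exitErr):
			// This is the "Process exited with status 1" case in the report:
			// kubectl ran but could not reach localhost:8443.
			fmt.Printf("describe nodes failed with status %d\nstderr:\n%s",
				exitErr.ExitCode(), stderr.String())
		default:
			fmt.Printf("could not run kubectl: %v\n", err)
		}
	}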
	I1213 19:11:04.025308   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:04.025326   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:06.584492   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:06.595822   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:06.595900   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:06.627891   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:06.627917   92925 cri.go:89] found id: ""
	I1213 19:11:06.627925   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:06.627982   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:06.632107   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:06.632184   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:06.657896   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:06.657921   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:06.657926   92925 cri.go:89] found id: ""
	I1213 19:11:06.657934   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:06.657989   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:06.661493   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:06.665545   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:06.665611   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:06.696673   92925 cri.go:89] found id: ""
	I1213 19:11:06.696748   92925 logs.go:282] 0 containers: []
	W1213 19:11:06.696773   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:06.696792   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:06.696879   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:06.724330   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:06.724355   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:06.724360   92925 cri.go:89] found id: ""
	I1213 19:11:06.724368   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:06.724422   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:06.728040   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:06.731506   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:06.731610   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:06.756515   92925 cri.go:89] found id: ""
	I1213 19:11:06.756578   92925 logs.go:282] 0 containers: []
	W1213 19:11:06.756601   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:06.756622   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:06.756700   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:06.783035   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:06.783094   92925 cri.go:89] found id: ""
	I1213 19:11:06.783117   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:11:06.783184   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:06.787082   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:06.787158   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:06.813991   92925 cri.go:89] found id: ""
	I1213 19:11:06.814014   92925 logs.go:282] 0 containers: []
	W1213 19:11:06.814022   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:06.814031   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:06.814043   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:06.860023   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:06.860057   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:06.915266   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:06.915303   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:07.005436   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:07.005480   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:07.041558   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:07.041591   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:07.055111   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:07.055140   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:07.085506   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:07.085534   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:07.140042   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:07.140080   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:07.170267   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:11:07.170300   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:07.197645   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:07.197676   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:07.298125   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:07.298167   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:07.368495   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:07.358879    2811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:07.359581    2811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:07.361161    2811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:07.361458    2811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:07.363677    2811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:07.358879    2811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:07.359581    2811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:07.361161    2811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:07.361458    2811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:07.363677    2811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
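	The pgrep probes above repeat roughly every three seconds (19:10:50, :53, :56, :59, ...), with a fresh log-gathering pass after each failed attempt, i.e. the harness is polling for a healthy kube-apiserver. A stdlib-only sketch of such a poll loop is shown below; the interval, the overall deadline, and the apiserverReady check are assumptions for illustration, not minikube's actual values or logic.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// apiserverReady is a stand-in health check: it only tests whether the
	// secure port accepts TCP connections, which is the condition failing in this log.
	func apiserverReady() bool {
		conn, err := net.DialTimeout("tcp", "localhost:8443", time.Second)
		if err != nil {
			return false
		}
		conn.Close()
		return true
	}

	func main() {
		const (
			interval = 3 * time.Second // matches the ~3s cadence of the pgrep lines above
			timeout  = 4 * time.Minute // assumed overall deadline
		)
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if apiserverReady() {
				fmt.Println("kube-apiserver is reachable")
				return
			}
			fmt.Println("kube-apiserver not ready, retrying...")
			time.Sleep(interval)
		}
		fmt.Println("timed out waiting for kube-apiserver")
	}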
	I1213 19:11:09.868760   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:09.879760   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:09.879831   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:09.907241   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:09.907264   92925 cri.go:89] found id: ""
	I1213 19:11:09.907272   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:09.907331   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:09.910883   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:09.910954   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:09.936137   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:09.936156   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:09.936161   92925 cri.go:89] found id: ""
	I1213 19:11:09.936167   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:09.936222   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:09.940048   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:09.951154   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:09.951222   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:09.985435   92925 cri.go:89] found id: ""
	I1213 19:11:09.985520   92925 logs.go:282] 0 containers: []
	W1213 19:11:09.985532   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:09.985540   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:09.985648   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:10.028412   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:10.028487   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:10.028521   92925 cri.go:89] found id: ""
	I1213 19:11:10.028549   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:10.028643   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:10.035436   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:10.040716   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:10.040834   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:10.070216   92925 cri.go:89] found id: ""
	I1213 19:11:10.070245   92925 logs.go:282] 0 containers: []
	W1213 19:11:10.070255   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:10.070261   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:10.070323   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:10.107151   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:10.107174   92925 cri.go:89] found id: ""
	I1213 19:11:10.107183   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:11:10.107241   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:10.111700   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:10.111773   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:10.148889   92925 cri.go:89] found id: ""
	I1213 19:11:10.148913   92925 logs.go:282] 0 containers: []
	W1213 19:11:10.148922   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:10.148931   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:10.148946   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:10.183850   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:10.183953   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:10.284535   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:10.284572   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:10.361456   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:10.353378    2893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:10.354229    2893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:10.355719    2893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:10.356209    2893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:10.357653    2893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:10.353378    2893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:10.354229    2893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:10.355719    2893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:10.356209    2893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:10.357653    2893 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:10.361521   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:10.361543   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:10.401195   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:10.401230   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:10.466771   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:11:10.466806   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:10.492988   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:10.493041   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:10.506114   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:10.506143   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:10.534614   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:10.534643   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:10.589313   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:10.589346   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:10.621617   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:10.621646   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
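	The cycle above repeats every few seconds while minikube waits for the apiserver to come back: it discovers each control-plane component's containers with crictl, tails their logs, and retries "kubectl describe nodes", which keeps failing because nothing is serving on localhost:8443 yet. A minimal manual sketch of that discovery step, run inside the node, is shown below; the crictl and journalctl invocations are copied verbatim from the log, and only the wrapping loop is illustrative.

	# Hypothetical manual re-run of the gathering loop above (assumes crictl
	# and journalctl are available inside the minikube node, as the log shows):
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")          # discover container IDs per component
	  for id in $ids; do
	    sudo /usr/local/bin/crictl logs --tail 400 "$id"       # tail that container's log
	  done
	done
	sudo journalctl -u kubelet -n 400                           # kubelet unit logs
	sudo journalctl -u crio -n 400                              # CRI-O unit logs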
	I1213 19:11:13.202940   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:13.214007   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:13.214076   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:13.241311   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:13.241334   92925 cri.go:89] found id: ""
	I1213 19:11:13.241342   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:13.241399   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:13.244857   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:13.244973   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:13.271246   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:13.271272   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:13.271277   92925 cri.go:89] found id: ""
	I1213 19:11:13.271284   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:13.271368   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:13.275204   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:13.278868   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:13.278941   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:13.306334   92925 cri.go:89] found id: ""
	I1213 19:11:13.306365   92925 logs.go:282] 0 containers: []
	W1213 19:11:13.306373   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:13.306379   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:13.306440   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:13.332388   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:13.332407   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:13.332412   92925 cri.go:89] found id: ""
	I1213 19:11:13.332419   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:13.332474   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:13.336618   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:13.340235   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:13.340305   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:13.366487   92925 cri.go:89] found id: ""
	I1213 19:11:13.366522   92925 logs.go:282] 0 containers: []
	W1213 19:11:13.366531   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:13.366537   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:13.366597   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:13.397475   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:13.397496   92925 cri.go:89] found id: ""
	I1213 19:11:13.397504   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:11:13.397565   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:13.401266   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:13.401377   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:13.430168   92925 cri.go:89] found id: ""
	I1213 19:11:13.430196   92925 logs.go:282] 0 containers: []
	W1213 19:11:13.430205   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:13.430221   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:13.430235   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:13.496086   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:13.486609    3016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:13.487472    3016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:13.489304    3016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:13.489961    3016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:13.491916    3016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:13.486609    3016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:13.487472    3016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:13.489304    3016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:13.489961    3016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:13.491916    3016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:13.496111   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:13.496124   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:13.548378   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:13.548413   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:13.601861   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:13.601899   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:13.634165   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:11:13.634193   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:13.662242   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:13.662270   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:13.737810   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:13.737846   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:13.770540   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:13.770574   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:13.783830   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:13.783907   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:13.810122   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:13.810149   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:13.856452   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:13.856485   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:16.448594   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:16.459829   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:16.459900   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:16.489717   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:16.489737   92925 cri.go:89] found id: ""
	I1213 19:11:16.489745   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:16.489799   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:16.494205   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:16.494290   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:16.529314   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:16.529336   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:16.529340   92925 cri.go:89] found id: ""
	I1213 19:11:16.529349   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:16.529404   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:16.533136   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:16.536814   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:16.536887   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:16.563026   92925 cri.go:89] found id: ""
	I1213 19:11:16.563064   92925 logs.go:282] 0 containers: []
	W1213 19:11:16.563073   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:16.563079   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:16.563139   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:16.594519   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:16.594541   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:16.594546   92925 cri.go:89] found id: ""
	I1213 19:11:16.594554   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:16.594611   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:16.598288   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:16.601875   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:16.601946   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:16.628577   92925 cri.go:89] found id: ""
	I1213 19:11:16.628603   92925 logs.go:282] 0 containers: []
	W1213 19:11:16.628612   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:16.628618   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:16.628676   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:16.656978   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:16.657001   92925 cri.go:89] found id: ""
	I1213 19:11:16.657039   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:11:16.657095   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:16.661124   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:16.661236   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:16.695697   92925 cri.go:89] found id: ""
	I1213 19:11:16.695731   92925 logs.go:282] 0 containers: []
	W1213 19:11:16.695739   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:16.695748   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:16.695760   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:16.766672   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:16.757776    3152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:16.758599    3152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:16.760229    3152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:16.760563    3152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:16.762386    3152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:16.757776    3152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:16.758599    3152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:16.760229    3152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:16.760563    3152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:16.762386    3152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:16.766696   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:16.766709   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:16.808187   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:16.808237   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:16.850027   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:16.850062   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:16.906135   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:16.906174   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:16.935630   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:11:16.935661   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:16.963433   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:16.963463   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:17.045818   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:17.045852   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:17.079053   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:17.079080   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:17.186217   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:17.186251   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:17.198725   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:17.198760   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:19.727394   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:19.738364   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:19.738431   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:19.768160   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:19.768183   92925 cri.go:89] found id: ""
	I1213 19:11:19.768196   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:19.768252   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:19.772004   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:19.772128   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:19.799342   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:19.799368   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:19.799374   92925 cri.go:89] found id: ""
	I1213 19:11:19.799382   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:19.799466   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:19.803455   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:19.807247   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:19.807340   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:19.835979   92925 cri.go:89] found id: ""
	I1213 19:11:19.836005   92925 logs.go:282] 0 containers: []
	W1213 19:11:19.836014   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:19.836021   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:19.836081   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:19.864302   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:19.864325   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:19.864331   92925 cri.go:89] found id: ""
	I1213 19:11:19.864338   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:19.864397   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:19.868104   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:19.871725   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:19.871812   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:19.899890   92925 cri.go:89] found id: ""
	I1213 19:11:19.899919   92925 logs.go:282] 0 containers: []
	W1213 19:11:19.899937   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:19.899944   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:19.900012   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:19.927600   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:19.927624   92925 cri.go:89] found id: ""
	I1213 19:11:19.927632   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:11:19.927685   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:19.931424   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:19.931509   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:19.961424   92925 cri.go:89] found id: ""
	I1213 19:11:19.961454   92925 logs.go:282] 0 containers: []
	W1213 19:11:19.961469   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:19.961479   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:19.961492   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:20.002155   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:20.002284   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:20.082123   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:20.071968    3295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:20.072791    3295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:20.075159    3295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:20.076013    3295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:20.077851    3295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:20.071968    3295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:20.072791    3295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:20.075159    3295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:20.076013    3295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:20.077851    3295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:20.082148   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:20.082162   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:20.127578   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:20.127614   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:20.174673   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:11:20.174713   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:20.204713   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:20.204791   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:20.282989   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:20.283026   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:20.327361   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:20.327436   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:20.427993   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:20.428032   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:20.442295   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:20.442326   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:20.471477   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:20.471510   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:23.025659   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:23.036724   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:23.036796   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:23.064245   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:23.064269   92925 cri.go:89] found id: ""
	I1213 19:11:23.064281   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:23.064341   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:23.068194   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:23.068269   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:23.097592   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:23.097616   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:23.097622   92925 cri.go:89] found id: ""
	I1213 19:11:23.097629   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:23.097692   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:23.104525   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:23.110378   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:23.110459   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:23.144932   92925 cri.go:89] found id: ""
	I1213 19:11:23.144958   92925 logs.go:282] 0 containers: []
	W1213 19:11:23.144966   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:23.144972   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:23.145063   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:23.177104   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:23.177129   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:23.177134   92925 cri.go:89] found id: ""
	I1213 19:11:23.177142   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:23.177197   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:23.181178   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:23.185904   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:23.185988   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:23.213662   92925 cri.go:89] found id: ""
	I1213 19:11:23.213740   92925 logs.go:282] 0 containers: []
	W1213 19:11:23.213765   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:23.213784   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:23.213891   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:23.244233   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:23.244298   92925 cri.go:89] found id: ""
	I1213 19:11:23.244322   92925 logs.go:282] 1 containers: [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:11:23.244413   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:23.248148   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:23.248228   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:23.276740   92925 cri.go:89] found id: ""
	I1213 19:11:23.276765   92925 logs.go:282] 0 containers: []
	W1213 19:11:23.276773   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:23.276784   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:23.276796   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:23.336420   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:11:23.336453   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:23.368543   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:23.368572   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:23.450730   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:23.450772   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:23.483510   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:23.483550   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:23.628675   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:23.619033    3457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:23.620672    3457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:23.621438    3457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:23.623126    3457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:23.623775    3457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:23.619033    3457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:23.620672    3457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:23.621438    3457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:23.623126    3457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:23.623775    3457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:23.628699   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:23.628713   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:23.665846   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:23.665882   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:23.713922   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:23.713959   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:23.752354   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:23.752384   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:23.858109   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:23.858150   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:23.871373   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:23.871404   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:26.419535   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:26.430634   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:26.430705   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:26.458628   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:26.458650   92925 cri.go:89] found id: ""
	I1213 19:11:26.458661   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:26.458716   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:26.462422   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:26.462495   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:26.490349   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:26.490389   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:26.490394   92925 cri.go:89] found id: ""
	I1213 19:11:26.490401   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:26.490468   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:26.494405   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:26.498636   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:26.498716   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:26.528607   92925 cri.go:89] found id: ""
	I1213 19:11:26.528637   92925 logs.go:282] 0 containers: []
	W1213 19:11:26.528646   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:26.528653   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:26.528722   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:26.558710   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:26.558733   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:26.558741   92925 cri.go:89] found id: ""
	I1213 19:11:26.558748   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:26.558825   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:26.562803   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:26.566707   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:26.566808   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:26.596729   92925 cri.go:89] found id: ""
	I1213 19:11:26.596754   92925 logs.go:282] 0 containers: []
	W1213 19:11:26.596763   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:26.596769   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:26.596826   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:26.624054   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:26.624077   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:26.624083   92925 cri.go:89] found id: ""
	I1213 19:11:26.624090   92925 logs.go:282] 2 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:11:26.624167   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:26.628449   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:26.632716   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:26.632822   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:26.659170   92925 cri.go:89] found id: ""
	I1213 19:11:26.659195   92925 logs.go:282] 0 containers: []
	W1213 19:11:26.659204   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:26.659213   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:26.659226   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:26.694272   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:11:26.694300   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:26.720924   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:11:26.720959   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:26.751980   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:26.752009   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:26.824509   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:26.824547   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:26.855705   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:26.855733   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:26.867403   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:26.867431   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:26.906787   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:26.906823   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:26.951319   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:26.951351   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:27.006541   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:27.006579   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:27.033554   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:27.033583   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:27.135230   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:27.135266   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:27.210106   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:27.201700    3649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:27.202413    3649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:27.203893    3649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:27.204311    3649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:27.205969    3649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:27.201700    3649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:27.202413    3649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:27.203893    3649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:27.204311    3649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:27.205969    3649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
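	Each retry in these cycles fails identically: crictl still reports the kube-apiserver container (667060dcec53...), but nothing accepts connections on localhost:8443, so the kubeconfig-driven describe call cannot succeed. A quick manual check under the same assumptions follows; the commands are copied from the log, and only their grouping into one sequence is illustrative.

	sudo crictl ps -a --quiet --name=kube-apiserver            # the apiserver container exists ...
	sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e   # ... so inspect why it is not serving
	sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig      # still refused until :8443 is listening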
	I1213 19:11:29.711829   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:29.723531   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:29.723601   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:29.753961   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:29.753984   92925 cri.go:89] found id: ""
	I1213 19:11:29.753992   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:29.754050   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:29.757806   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:29.757873   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:29.783149   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:29.783181   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:29.783186   92925 cri.go:89] found id: ""
	I1213 19:11:29.783194   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:29.783263   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:29.787082   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:29.790979   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:29.791109   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:29.817959   92925 cri.go:89] found id: ""
	I1213 19:11:29.817985   92925 logs.go:282] 0 containers: []
	W1213 19:11:29.817994   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:29.818000   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:29.818060   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:29.846235   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:29.846257   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:29.846262   92925 cri.go:89] found id: ""
	I1213 19:11:29.846270   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:29.846351   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:29.849953   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:29.853572   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:29.853692   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:29.879800   92925 cri.go:89] found id: ""
	I1213 19:11:29.879834   92925 logs.go:282] 0 containers: []
	W1213 19:11:29.879843   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:29.879850   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:29.879915   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:29.907082   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:29.907116   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:29.907121   92925 cri.go:89] found id: ""
	I1213 19:11:29.907128   92925 logs.go:282] 2 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:11:29.907192   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:29.910914   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:29.914566   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:29.914651   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:29.939124   92925 cri.go:89] found id: ""
	I1213 19:11:29.939149   92925 logs.go:282] 0 containers: []
	W1213 19:11:29.939158   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:29.939168   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:29.939205   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:29.981605   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:29.981639   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:30.089079   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:30.089116   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:30.156090   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:30.156124   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:30.186549   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:11:30.186580   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:30.214921   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:11:30.214950   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:30.242668   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:30.242697   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:30.319413   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:30.319445   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:30.419178   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:30.419215   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:30.431724   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:30.431753   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:30.501053   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:30.492849    3776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:30.493577    3776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:30.495362    3776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:30.495976    3776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:30.497562    3776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:30.492849    3776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:30.493577    3776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:30.495362    3776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:30.495976    3776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:30.497562    3776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:30.501078   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:30.501092   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:30.532550   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:30.532577   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:33.076374   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:33.087831   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:33.087899   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:33.126218   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:33.126241   92925 cri.go:89] found id: ""
	I1213 19:11:33.126251   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:33.126315   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:33.130647   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:33.130731   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:33.158982   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:33.159013   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:33.159020   92925 cri.go:89] found id: ""
	I1213 19:11:33.159028   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:33.159094   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:33.162984   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:33.166562   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:33.166635   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:33.193330   92925 cri.go:89] found id: ""
	I1213 19:11:33.193353   92925 logs.go:282] 0 containers: []
	W1213 19:11:33.193361   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:33.193367   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:33.193423   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:33.221129   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:33.221153   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:33.221159   92925 cri.go:89] found id: ""
	I1213 19:11:33.221166   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:33.221239   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:33.225797   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:33.229503   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:33.229615   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:33.257761   92925 cri.go:89] found id: ""
	I1213 19:11:33.257786   92925 logs.go:282] 0 containers: []
	W1213 19:11:33.257795   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:33.257802   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:33.257865   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:33.285915   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:33.285941   92925 cri.go:89] found id: "2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:33.285957   92925 cri.go:89] found id: ""
	I1213 19:11:33.285968   92925 logs.go:282] 2 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d]
	I1213 19:11:33.286026   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:33.289819   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:33.293581   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:33.293655   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:33.324324   92925 cri.go:89] found id: ""
	I1213 19:11:33.324348   92925 logs.go:282] 0 containers: []
	W1213 19:11:33.324357   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:33.324366   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:33.324377   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:33.350842   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:33.350913   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:33.424344   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:33.424380   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:33.452897   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:33.452930   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:33.504468   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:33.504506   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:33.579150   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:11:33.579183   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:33.607049   92925 logs.go:123] Gathering logs for kube-controller-manager [2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d] ...
	I1213 19:11:33.607076   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d70d563232aaab7cf1652029bf88126f71c86da3849faf9fdb0585f9a38aa6d"
	I1213 19:11:33.633297   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:33.633326   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:33.668670   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:33.668699   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:33.766904   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:33.766936   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:33.780538   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:33.780567   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:33.857253   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:33.848822    3937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:33.849778    3937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:33.851312    3937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:33.851759    3937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:33.853392    3937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:33.848822    3937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:33.849778    3937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:33.851312    3937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:33.851759    3937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:33.853392    3937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:33.857275   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:33.857290   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:36.398970   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:36.410341   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:36.410416   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:36.438456   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:36.438479   92925 cri.go:89] found id: ""
	I1213 19:11:36.438488   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:36.438568   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:36.442320   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:36.442395   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:36.470092   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:36.470116   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:36.470121   92925 cri.go:89] found id: ""
	I1213 19:11:36.470131   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:36.470218   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:36.474021   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:36.477467   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:36.477578   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:36.505647   92925 cri.go:89] found id: ""
	I1213 19:11:36.505670   92925 logs.go:282] 0 containers: []
	W1213 19:11:36.505714   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:36.505733   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:36.505804   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:36.537872   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:36.537895   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:36.537900   92925 cri.go:89] found id: ""
	I1213 19:11:36.537907   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:36.537961   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:36.541660   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:36.545244   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:36.545314   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:36.570195   92925 cri.go:89] found id: ""
	I1213 19:11:36.570228   92925 logs.go:282] 0 containers: []
	W1213 19:11:36.570238   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:36.570250   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:36.570339   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:36.595894   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:36.595958   92925 cri.go:89] found id: ""
	I1213 19:11:36.595979   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:11:36.596064   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:36.599675   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:36.599789   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:36.624988   92925 cri.go:89] found id: ""
	I1213 19:11:36.625083   92925 logs.go:282] 0 containers: []
	W1213 19:11:36.625101   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:36.625112   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:36.625123   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:36.718891   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:36.718924   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:36.786494   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:36.778476    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:36.779141    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:36.780744    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:36.781242    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:36.782695    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:36.778476    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:36.779141    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:36.780744    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:36.781242    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:36.782695    4019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:36.786519   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:36.786531   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:36.828295   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:36.828328   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:36.871560   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:36.871591   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:36.941295   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:36.941335   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:37.023869   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:37.023902   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:37.055672   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:37.055700   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:37.069301   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:37.069334   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:37.098989   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:37.099015   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:37.135738   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:11:37.135771   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:39.664114   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:39.675928   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:39.675999   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:39.702971   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:39.702989   92925 cri.go:89] found id: ""
	I1213 19:11:39.702998   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:39.703053   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:39.707021   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:39.707096   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:39.733615   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:39.733637   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:39.733642   92925 cri.go:89] found id: ""
	I1213 19:11:39.733663   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:39.733720   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:39.737520   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:39.740992   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:39.741107   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:39.769090   92925 cri.go:89] found id: ""
	I1213 19:11:39.769174   92925 logs.go:282] 0 containers: []
	W1213 19:11:39.769194   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:39.769201   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:39.769351   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:39.804293   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:39.804314   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:39.804319   92925 cri.go:89] found id: ""
	I1213 19:11:39.804326   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:39.804389   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:39.808495   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:39.812181   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:39.812255   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:39.838217   92925 cri.go:89] found id: ""
	I1213 19:11:39.838243   92925 logs.go:282] 0 containers: []
	W1213 19:11:39.838252   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:39.838259   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:39.838314   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:39.866484   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:39.866504   92925 cri.go:89] found id: ""
	I1213 19:11:39.866512   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:11:39.866567   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:39.870814   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:39.870885   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:39.908207   92925 cri.go:89] found id: ""
	I1213 19:11:39.908233   92925 logs.go:282] 0 containers: []
	W1213 19:11:39.908243   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:39.908252   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:39.908264   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:39.920472   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:39.920499   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:39.948910   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:39.948951   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:40.012782   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:11:40.012825   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:40.047267   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:40.047297   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:40.129790   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:40.129871   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:40.168487   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:40.168519   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:40.269381   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:40.269456   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:40.338885   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:40.330165    4198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:40.330955    4198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:40.333137    4198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:40.333832    4198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:40.335154    4198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:40.330165    4198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:40.330955    4198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:40.333137    4198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:40.333832    4198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:40.335154    4198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:40.338906   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:40.338919   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:40.394986   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:40.395024   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:40.460751   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:40.460799   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:42.992519   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:43.004031   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:43.004110   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:43.032556   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:43.032578   92925 cri.go:89] found id: ""
	I1213 19:11:43.032586   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:43.032640   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:43.036332   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:43.036401   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:43.065252   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:43.065282   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:43.065288   92925 cri.go:89] found id: ""
	I1213 19:11:43.065296   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:43.065358   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:43.070007   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:43.074047   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:43.074122   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:43.108141   92925 cri.go:89] found id: ""
	I1213 19:11:43.108169   92925 logs.go:282] 0 containers: []
	W1213 19:11:43.108181   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:43.108188   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:43.108248   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:43.139539   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:43.139560   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:43.139566   92925 cri.go:89] found id: ""
	I1213 19:11:43.139574   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:43.139629   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:43.143534   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:43.147218   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:43.147292   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:43.175751   92925 cri.go:89] found id: ""
	I1213 19:11:43.175825   92925 logs.go:282] 0 containers: []
	W1213 19:11:43.175849   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:43.175868   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:43.175952   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:43.200994   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:43.201062   92925 cri.go:89] found id: ""
	I1213 19:11:43.201072   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:11:43.201127   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:43.204988   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:43.205128   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:43.231895   92925 cri.go:89] found id: ""
	I1213 19:11:43.231922   92925 logs.go:282] 0 containers: []
	W1213 19:11:43.231946   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:43.231955   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:43.231968   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:43.272192   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:43.272228   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:43.334615   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:11:43.334650   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:43.366125   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:43.366153   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:43.397225   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:43.397254   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:43.468828   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:43.460439    4331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:43.461076    4331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:43.462731    4331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:43.463290    4331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:43.464964    4331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:43.460439    4331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:43.461076    4331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:43.462731    4331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:43.463290    4331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:43.464964    4331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:43.468856   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:43.468869   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:43.519337   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:43.519376   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:43.552934   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:43.552963   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:43.636492   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:43.636526   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:43.735496   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:43.735529   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:43.748666   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:43.748693   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:46.276009   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:46.287459   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:46.287539   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:46.315787   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:46.315809   92925 cri.go:89] found id: ""
	I1213 19:11:46.315817   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:46.315881   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:46.319776   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:46.319870   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:46.349638   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:46.349701   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:46.349721   92925 cri.go:89] found id: ""
	I1213 19:11:46.349737   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:46.349810   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:46.353770   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:46.357319   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:46.357391   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:46.387852   92925 cri.go:89] found id: ""
	I1213 19:11:46.387879   92925 logs.go:282] 0 containers: []
	W1213 19:11:46.387888   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:46.387895   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:46.387956   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:46.415327   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:46.415351   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:46.415362   92925 cri.go:89] found id: ""
	I1213 19:11:46.415369   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:46.415425   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:46.420351   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:46.423877   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:46.423945   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:46.452445   92925 cri.go:89] found id: ""
	I1213 19:11:46.452471   92925 logs.go:282] 0 containers: []
	W1213 19:11:46.452480   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:46.452487   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:46.452543   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:46.488306   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:46.488329   92925 cri.go:89] found id: ""
	I1213 19:11:46.488337   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:11:46.488393   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:46.492372   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:46.492477   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:46.531601   92925 cri.go:89] found id: ""
	I1213 19:11:46.531625   92925 logs.go:282] 0 containers: []
	W1213 19:11:46.531635   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:46.531644   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:46.531656   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:46.576619   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:46.576653   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:46.637968   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:46.638005   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:46.666074   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:46.666103   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:46.699911   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:46.699988   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:46.741837   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:11:46.741889   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:46.771703   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:46.771729   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:46.848202   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:46.848240   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:46.949628   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:46.949664   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:46.963040   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:46.963071   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:47.045784   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:47.037108    4491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:47.038507    4491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:47.039621    4491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:47.040561    4491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:47.042097    4491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:47.037108    4491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:47.038507    4491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:47.039621    4491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:47.040561    4491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:47.042097    4491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:47.045805   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:47.045818   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:49.573745   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:49.584944   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:49.585049   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:49.612421   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:49.612440   92925 cri.go:89] found id: ""
	I1213 19:11:49.612448   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:49.612503   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:49.616771   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:49.616842   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:49.644250   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:49.644313   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:49.644342   92925 cri.go:89] found id: ""
	I1213 19:11:49.644365   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:49.644448   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:49.648357   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:49.652087   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:49.652211   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:49.678765   92925 cri.go:89] found id: ""
	I1213 19:11:49.678790   92925 logs.go:282] 0 containers: []
	W1213 19:11:49.678798   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:49.678804   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:49.678882   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:49.707013   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:49.707082   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:49.707102   92925 cri.go:89] found id: ""
	I1213 19:11:49.707128   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:49.707219   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:49.711513   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:49.715226   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:49.715321   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:49.741306   92925 cri.go:89] found id: ""
	I1213 19:11:49.741375   92925 logs.go:282] 0 containers: []
	W1213 19:11:49.741401   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:49.741421   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:49.741505   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:49.768427   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:49.768451   92925 cri.go:89] found id: ""
	I1213 19:11:49.768459   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:11:49.768517   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:49.772356   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:49.772478   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:49.801564   92925 cri.go:89] found id: ""
	I1213 19:11:49.801633   92925 logs.go:282] 0 containers: []
	W1213 19:11:49.801659   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:49.801687   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:49.801725   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:49.827233   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:49.827261   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:49.884809   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:49.884846   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:49.911980   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:11:49.912011   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:49.938143   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:49.938174   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:49.951851   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:49.951880   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:49.992816   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:49.992861   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:50.064112   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:50.064149   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:50.149808   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:50.149847   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:50.182876   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:50.182907   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:50.285831   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:50.285868   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:50.357682   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:50.350098    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:50.350586    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:50.351793    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:50.352420    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:50.354169    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:50.350098    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:50.350586    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:50.351793    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:50.352420    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:50.354169    4633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:52.858319   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:52.869473   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:52.869548   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:52.897144   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:52.897169   92925 cri.go:89] found id: ""
	I1213 19:11:52.897177   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:52.897234   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:52.900973   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:52.901074   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:52.928815   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:52.928842   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:52.928847   92925 cri.go:89] found id: ""
	I1213 19:11:52.928855   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:52.928912   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:52.932785   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:52.936853   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:52.936928   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:52.963913   92925 cri.go:89] found id: ""
	I1213 19:11:52.963940   92925 logs.go:282] 0 containers: []
	W1213 19:11:52.963949   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:52.963954   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:52.964018   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:52.993621   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:52.993685   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:52.993705   92925 cri.go:89] found id: ""
	I1213 19:11:52.993730   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:52.993820   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:52.997612   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:53.001214   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:53.001293   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:53.032707   92925 cri.go:89] found id: ""
	I1213 19:11:53.032733   92925 logs.go:282] 0 containers: []
	W1213 19:11:53.032742   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:53.032749   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:53.032812   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:53.059757   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:53.059780   92925 cri.go:89] found id: ""
	I1213 19:11:53.059805   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:11:53.059860   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:53.063600   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:53.063673   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:53.091179   92925 cri.go:89] found id: ""
	I1213 19:11:53.091248   92925 logs.go:282] 0 containers: []
	W1213 19:11:53.091286   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:53.091303   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:11:53.091316   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:53.123301   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:53.123391   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:53.196598   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:53.196634   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:53.227689   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:53.227715   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:53.327870   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:53.327905   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:53.343261   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:53.343290   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:53.371058   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:53.371089   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:53.418862   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:53.418896   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:53.475787   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:53.475822   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:53.507061   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:53.507090   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:53.584040   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:53.575651    4763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:53.576367    4763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:53.577874    4763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:53.578518    4763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:53.580190    4763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:53.575651    4763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:53.576367    4763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:53.577874    4763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:53.578518    4763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:53.580190    4763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:53.584063   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:53.584076   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:56.124239   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:56.136746   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:56.136818   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:56.165417   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:56.165442   92925 cri.go:89] found id: ""
	I1213 19:11:56.165451   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:56.165513   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:56.169272   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:56.169348   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:56.198281   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:56.198304   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:56.198309   92925 cri.go:89] found id: ""
	I1213 19:11:56.198316   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:56.198370   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:56.202310   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:56.206597   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:56.206670   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:56.233152   92925 cri.go:89] found id: ""
	I1213 19:11:56.233179   92925 logs.go:282] 0 containers: []
	W1213 19:11:56.233189   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:56.233195   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:56.233259   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:56.263980   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:56.264000   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:56.264005   92925 cri.go:89] found id: ""
	I1213 19:11:56.264013   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:56.264071   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:56.268409   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:56.272169   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:56.272245   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:56.307136   92925 cri.go:89] found id: ""
	I1213 19:11:56.307163   92925 logs.go:282] 0 containers: []
	W1213 19:11:56.307173   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:56.307179   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:56.307237   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:56.335595   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:56.335618   92925 cri.go:89] found id: ""
	I1213 19:11:56.335626   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:11:56.335684   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:56.339317   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:56.339388   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:56.365740   92925 cri.go:89] found id: ""
	I1213 19:11:56.365763   92925 logs.go:282] 0 containers: []
	W1213 19:11:56.365773   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:56.365782   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:11:56.365795   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:56.392684   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:11:56.392715   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:56.443884   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:11:56.443916   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:56.470931   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:11:56.471007   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:56.498493   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:56.498569   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:56.594275   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:56.594325   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:56.697865   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:11:56.697902   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:11:56.710803   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:56.710833   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:56.774588   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:56.766250    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:56.767127    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:56.768759    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:56.769116    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:56.770766    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:56.766250    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:56.767127    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:56.768759    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:56.769116    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:56.770766    4883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:11:56.774608   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:11:56.774621   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:56.822318   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:11:56.822354   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:56.879404   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:56.879440   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:59.418085   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:11:59.429523   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:11:59.429599   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:11:59.459140   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:11:59.459164   92925 cri.go:89] found id: ""
	I1213 19:11:59.459173   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:11:59.459250   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:59.463131   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:11:59.463231   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:11:59.491515   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:11:59.491539   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:11:59.491544   92925 cri.go:89] found id: ""
	I1213 19:11:59.491552   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:11:59.491650   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:59.495555   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:59.499043   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:11:59.499118   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:11:59.542670   92925 cri.go:89] found id: ""
	I1213 19:11:59.542745   92925 logs.go:282] 0 containers: []
	W1213 19:11:59.542771   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:11:59.542785   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:11:59.542861   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:11:59.569926   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:11:59.569950   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:11:59.569954   92925 cri.go:89] found id: ""
	I1213 19:11:59.569962   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:11:59.570030   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:59.574242   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:59.578071   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:11:59.578177   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:11:59.610686   92925 cri.go:89] found id: ""
	I1213 19:11:59.610714   92925 logs.go:282] 0 containers: []
	W1213 19:11:59.610723   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:11:59.610729   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:11:59.610789   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:11:59.639587   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:59.639641   92925 cri.go:89] found id: ""
	I1213 19:11:59.639659   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:11:59.639720   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:11:59.644316   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:11:59.644404   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:11:59.672619   92925 cri.go:89] found id: ""
	I1213 19:11:59.672644   92925 logs.go:282] 0 containers: []
	W1213 19:11:59.672653   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:11:59.672663   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:11:59.672684   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:11:59.700144   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:11:59.700172   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:11:59.777808   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:11:59.777856   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:11:59.811078   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:11:59.811111   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:11:59.910789   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:11:59.910827   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:11:59.987053   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:11:59.975650    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:59.976469    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:59.977682    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:59.978310    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:59.979849    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:11:59.975650    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:59.976469    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:59.977682    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:59.978310    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:11:59.979849    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:00.003642   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:00.003687   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:00.194711   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:00.194803   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:00.357297   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:00.357336   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:00.438487   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:00.438580   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:00.454845   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:00.454880   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:00.564592   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:00.564633   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:03.112543   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:03.123663   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:03.123738   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:03.157514   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:03.157538   92925 cri.go:89] found id: ""
	I1213 19:12:03.157546   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:03.157601   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:03.161756   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:03.161829   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:03.187867   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:03.187887   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:03.187892   92925 cri.go:89] found id: ""
	I1213 19:12:03.187900   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:03.187954   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:03.191586   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:03.195089   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:03.195186   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:03.227702   92925 cri.go:89] found id: ""
	I1213 19:12:03.227727   92925 logs.go:282] 0 containers: []
	W1213 19:12:03.227736   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:03.227742   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:03.227802   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:03.254539   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:03.254561   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:03.254566   92925 cri.go:89] found id: ""
	I1213 19:12:03.254574   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:03.254653   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:03.258434   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:03.262232   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:03.262309   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:03.293528   92925 cri.go:89] found id: ""
	I1213 19:12:03.293552   92925 logs.go:282] 0 containers: []
	W1213 19:12:03.293561   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:03.293567   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:03.293627   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:03.324573   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:03.324595   92925 cri.go:89] found id: ""
	I1213 19:12:03.324603   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:03.324655   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:03.328400   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:03.328469   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:03.354317   92925 cri.go:89] found id: ""
	I1213 19:12:03.354342   92925 logs.go:282] 0 containers: []
	W1213 19:12:03.354351   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:03.354362   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:03.354376   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:03.416520   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:03.416559   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:03.443937   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:03.443966   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:03.520631   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:03.520669   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:03.539545   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:03.539575   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:03.609658   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:03.599495    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:03.600262    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:03.602170    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:03.604093    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:03.604836    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:03.599495    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:03.600262    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:03.602170    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:03.604093    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:03.604836    5140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:03.609679   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:03.609691   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:03.641994   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:03.642021   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:03.683262   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:03.683296   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:03.711455   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:03.711486   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:03.742963   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:03.742994   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:03.842936   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:03.842971   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:06.387950   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:06.398757   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:06.398838   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:06.427281   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:06.427343   92925 cri.go:89] found id: ""
	I1213 19:12:06.427359   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:06.427424   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:06.431296   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:06.431370   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:06.458047   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:06.458069   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:06.458073   92925 cri.go:89] found id: ""
	I1213 19:12:06.458081   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:06.458138   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:06.461822   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:06.466010   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:06.466084   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:06.504515   92925 cri.go:89] found id: ""
	I1213 19:12:06.504542   92925 logs.go:282] 0 containers: []
	W1213 19:12:06.504551   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:06.504560   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:06.504621   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:06.541478   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:06.541501   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:06.541506   92925 cri.go:89] found id: ""
	I1213 19:12:06.541514   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:06.541576   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:06.545645   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:06.549634   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:06.549704   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:06.576630   92925 cri.go:89] found id: ""
	I1213 19:12:06.576698   92925 logs.go:282] 0 containers: []
	W1213 19:12:06.576724   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:06.576744   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:06.576832   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:06.604207   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:06.604229   92925 cri.go:89] found id: ""
	I1213 19:12:06.604237   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:06.604298   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:06.608117   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:06.608232   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:06.634291   92925 cri.go:89] found id: ""
	I1213 19:12:06.634362   92925 logs.go:282] 0 containers: []
	W1213 19:12:06.634379   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:06.634388   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:06.634402   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:06.696997   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:06.697085   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:06.756705   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:06.756741   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:06.836493   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:06.836525   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:06.936663   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:06.936700   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:06.949180   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:06.949212   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:07.020703   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:07.012352    5275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:07.013247    5275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:07.014825    5275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:07.015260    5275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:07.016747    5275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:07.012352    5275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:07.013247    5275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:07.014825    5275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:07.015260    5275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:07.016747    5275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:07.020728   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:07.020741   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:07.052354   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:07.052383   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:07.079834   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:07.079865   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:07.119690   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:07.119720   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:07.146357   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:07.146385   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:09.686883   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:09.697849   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:09.697924   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:09.724282   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:09.724307   92925 cri.go:89] found id: ""
	I1213 19:12:09.724316   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:09.724374   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:09.727853   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:09.727929   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:09.757294   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:09.757315   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:09.757320   92925 cri.go:89] found id: ""
	I1213 19:12:09.757328   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:09.757383   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:09.761291   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:09.764680   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:09.764755   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:09.791939   92925 cri.go:89] found id: ""
	I1213 19:12:09.791964   92925 logs.go:282] 0 containers: []
	W1213 19:12:09.791974   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:09.791979   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:09.792059   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:09.819349   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:09.819415   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:09.819435   92925 cri.go:89] found id: ""
	I1213 19:12:09.819460   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:09.819540   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:09.823580   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:09.827023   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:09.827138   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:09.857888   92925 cri.go:89] found id: ""
	I1213 19:12:09.857966   92925 logs.go:282] 0 containers: []
	W1213 19:12:09.857990   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:09.858001   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:09.858066   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:09.884350   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:09.884373   92925 cri.go:89] found id: ""
	I1213 19:12:09.884381   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:09.884438   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:09.888641   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:09.888720   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:09.915592   92925 cri.go:89] found id: ""
	I1213 19:12:09.915614   92925 logs.go:282] 0 containers: []
	W1213 19:12:09.915623   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:09.915632   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:09.915644   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:09.941582   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:09.941614   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:10.002342   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:10.002377   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:10.031301   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:10.031336   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:10.071296   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:10.071332   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:10.123567   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:10.123605   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:10.157428   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:10.157457   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:10.238347   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:10.238426   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:10.334563   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:10.334598   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:10.347255   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:10.347286   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:10.432160   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:10.423156    5451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:10.423973    5451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:10.425617    5451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:10.426254    5451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:10.428070    5451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:10.423156    5451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:10.423973    5451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:10.425617    5451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:10.426254    5451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:10.428070    5451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:10.432226   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:10.432252   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:12.994728   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:13.005943   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:13.006017   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:13.033581   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:13.033602   92925 cri.go:89] found id: ""
	I1213 19:12:13.033610   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:13.033689   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:13.037439   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:13.037531   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:13.069482   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:13.069506   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:13.069511   92925 cri.go:89] found id: ""
	I1213 19:12:13.069520   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:13.069579   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:13.073384   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:13.077179   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:13.077250   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:13.117434   92925 cri.go:89] found id: ""
	I1213 19:12:13.117508   92925 logs.go:282] 0 containers: []
	W1213 19:12:13.117525   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:13.117532   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:13.117603   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:13.151113   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:13.151191   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:13.151211   92925 cri.go:89] found id: ""
	I1213 19:12:13.151235   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:13.151330   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:13.155305   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:13.159267   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:13.159375   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:13.193156   92925 cri.go:89] found id: ""
	I1213 19:12:13.193183   92925 logs.go:282] 0 containers: []
	W1213 19:12:13.193191   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:13.193197   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:13.193303   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:13.228192   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:13.228272   92925 cri.go:89] found id: ""
	I1213 19:12:13.228304   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:13.228385   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:13.232149   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:13.232270   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:13.265793   92925 cri.go:89] found id: ""
	I1213 19:12:13.265868   92925 logs.go:282] 0 containers: []
	W1213 19:12:13.265892   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:13.265914   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:13.265974   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:13.298247   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:13.298332   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:13.338944   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:13.338977   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:13.398561   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:13.398600   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:13.426862   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:13.426891   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:13.526771   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:13.526807   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:13.539556   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:13.539587   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:13.606738   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:13.598805    5571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:13.599569    5571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:13.600660    5571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:13.601348    5571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:13.602977    5571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:13.598805    5571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:13.599569    5571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:13.600660    5571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:13.601348    5571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:13.602977    5571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:13.606761   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:13.606777   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:13.632299   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:13.632367   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:13.681186   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:13.681224   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:13.715711   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:13.715741   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:16.289974   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:16.301720   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:16.301794   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:16.333180   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:16.333203   92925 cri.go:89] found id: ""
	I1213 19:12:16.333211   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:16.333262   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:16.337163   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:16.337233   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:16.366808   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:16.366829   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:16.366834   92925 cri.go:89] found id: ""
	I1213 19:12:16.366841   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:16.366897   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:16.370643   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:16.374381   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:16.374453   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:16.402639   92925 cri.go:89] found id: ""
	I1213 19:12:16.402663   92925 logs.go:282] 0 containers: []
	W1213 19:12:16.402672   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:16.402678   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:16.402735   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:16.429862   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:16.429927   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:16.429948   92925 cri.go:89] found id: ""
	I1213 19:12:16.429971   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:16.430057   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:16.437586   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:16.443620   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:16.443739   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:16.468889   92925 cri.go:89] found id: ""
	I1213 19:12:16.468915   92925 logs.go:282] 0 containers: []
	W1213 19:12:16.468933   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:16.468940   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:16.469002   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:16.497884   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:16.497952   92925 cri.go:89] found id: ""
	I1213 19:12:16.497975   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:16.498065   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:16.501907   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:16.502017   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:16.528833   92925 cri.go:89] found id: ""
	I1213 19:12:16.528861   92925 logs.go:282] 0 containers: []
	W1213 19:12:16.528871   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:16.528880   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:16.528891   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:16.571970   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:16.572003   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:16.599399   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:16.599433   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:16.626668   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:16.626698   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:16.657476   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:16.657505   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:16.756171   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:16.756207   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:16.768558   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:16.768587   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:16.841002   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:16.841041   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:16.913877   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:16.913951   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:17.002296   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:16.981549    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:16.983800    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:16.984559    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:16.987461    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:16.988234    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:16.981549    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:16.983800    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:16.984559    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:16.987461    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:16.988234    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:17.002364   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:17.002385   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:17.029940   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:17.029968   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:19.576739   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:19.587975   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:19.588041   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:19.614817   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:19.614840   92925 cri.go:89] found id: ""
	I1213 19:12:19.614848   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:19.614903   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:19.618582   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:19.618679   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:19.651398   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:19.651419   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:19.651424   92925 cri.go:89] found id: ""
	I1213 19:12:19.651432   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:19.651501   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:19.655392   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:19.659059   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:19.659134   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:19.684221   92925 cri.go:89] found id: ""
	I1213 19:12:19.684247   92925 logs.go:282] 0 containers: []
	W1213 19:12:19.684257   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:19.684264   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:19.684323   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:19.711198   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:19.711220   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:19.711226   92925 cri.go:89] found id: ""
	I1213 19:12:19.711233   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:19.711289   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:19.715680   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:19.719221   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:19.719292   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:19.751237   92925 cri.go:89] found id: ""
	I1213 19:12:19.751286   92925 logs.go:282] 0 containers: []
	W1213 19:12:19.751296   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:19.751303   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:19.751371   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:19.778300   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:19.778321   92925 cri.go:89] found id: ""
	I1213 19:12:19.778330   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:19.778413   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:19.782520   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:19.782614   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:19.814477   92925 cri.go:89] found id: ""
	I1213 19:12:19.814507   92925 logs.go:282] 0 containers: []
	W1213 19:12:19.814517   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:19.814526   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:19.814558   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:19.855891   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:19.855922   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:19.917648   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:19.917687   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:19.949548   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:19.949574   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:19.976644   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:19.976680   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:20.064988   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:20.065042   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:20.114742   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:20.114776   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:20.220028   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:20.220066   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:20.232673   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:20.232703   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:20.314099   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:20.305597    5860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:20.306343    5860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:20.308133    5860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:20.308739    5860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:20.310382    5860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:20.305597    5860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:20.306343    5860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:20.308133    5860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:20.308739    5860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:20.310382    5860 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:20.314125   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:20.314142   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:20.358618   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:20.358649   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:22.884692   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:22.896642   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:22.896714   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:22.925894   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:22.925919   92925 cri.go:89] found id: ""
	I1213 19:12:22.925928   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:22.925982   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:22.929556   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:22.929630   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:22.957310   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:22.957375   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:22.957393   92925 cri.go:89] found id: ""
	I1213 19:12:22.957419   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:22.957496   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:22.961230   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:22.964927   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:22.965122   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:22.993901   92925 cri.go:89] found id: ""
	I1213 19:12:22.993974   92925 logs.go:282] 0 containers: []
	W1213 19:12:22.994000   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:22.994012   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:22.994092   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:23.021087   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:23.021112   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:23.021117   92925 cri.go:89] found id: ""
	I1213 19:12:23.021123   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:23.021179   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:23.025414   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:23.029044   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:23.029147   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:23.054815   92925 cri.go:89] found id: ""
	I1213 19:12:23.054840   92925 logs.go:282] 0 containers: []
	W1213 19:12:23.054848   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:23.054855   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:23.054913   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:23.080286   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:23.080312   92925 cri.go:89] found id: ""
	I1213 19:12:23.080320   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:23.080407   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:23.084274   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:23.084375   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:23.115727   92925 cri.go:89] found id: ""
	I1213 19:12:23.115750   92925 logs.go:282] 0 containers: []
	W1213 19:12:23.115758   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:23.115767   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:23.115796   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:23.194830   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:23.186405    5944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:23.187281    5944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:23.188756    5944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:23.189379    5944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:23.191250    5944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:23.186405    5944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:23.187281    5944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:23.188756    5944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:23.189379    5944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:23.191250    5944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:23.194890   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:23.194911   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:23.234766   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:23.234801   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:23.282930   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:23.282966   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:23.352028   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:23.352067   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:23.379340   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:23.379418   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:23.425558   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:23.425589   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:23.453170   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:23.453198   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:23.484993   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:23.485089   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:23.575060   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:23.575093   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:23.676623   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:23.676658   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:26.191200   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:26.202087   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:26.202208   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:26.237575   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:26.237607   92925 cri.go:89] found id: ""
	I1213 19:12:26.237616   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:26.237685   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:26.242604   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:26.242726   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:26.275657   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:26.275680   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:26.275687   92925 cri.go:89] found id: ""
	I1213 19:12:26.275696   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:26.275774   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:26.279747   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:26.283677   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:26.283784   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:26.312109   92925 cri.go:89] found id: ""
	I1213 19:12:26.312185   92925 logs.go:282] 0 containers: []
	W1213 19:12:26.312219   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:26.312239   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:26.312329   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:26.342409   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:26.342432   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:26.342437   92925 cri.go:89] found id: ""
	I1213 19:12:26.342445   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:26.342500   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:26.346485   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:26.350281   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:26.350365   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:26.375751   92925 cri.go:89] found id: ""
	I1213 19:12:26.375775   92925 logs.go:282] 0 containers: []
	W1213 19:12:26.375783   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:26.375790   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:26.375864   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:26.401584   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:26.401607   92925 cri.go:89] found id: ""
	I1213 19:12:26.401614   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:26.401686   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:26.405294   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:26.405373   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:26.433390   92925 cri.go:89] found id: ""
	I1213 19:12:26.433467   92925 logs.go:282] 0 containers: []
	W1213 19:12:26.433491   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:26.433507   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:26.433533   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:26.493265   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:26.493305   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:26.528279   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:26.528307   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:26.612530   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:26.612565   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:26.625201   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:26.625231   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:26.695921   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:26.686948    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:26.687827    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:26.689491    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:26.690111    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:26.691852    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:26.686948    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:26.687827    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:26.689491    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:26.690111    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:26.691852    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:26.695942   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:26.695955   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:26.721367   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:26.721436   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:26.747790   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:26.747818   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:26.778783   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:26.778813   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:26.875307   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:26.875341   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:26.926065   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:26.926104   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
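	(The repeated `crictl ps -a --quiet --name=<component>` calls above are how the per-component container IDs — the `found id:` lines — are collected before each component's logs are tailed. A minimal Go sketch of that lookup, assuming a `crictl` binary on PATH and no sudo/SSH plumbing, both of which differ from the real run shown in this log:)

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs returns the IDs of all containers (running or exited)
// whose name matches the given component, mirroring the
// "crictl ps -a --quiet --name=<component>" calls in the log above.
func listContainerIDs(component string) ([]string, error) {
	out, err := exec.Command("crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps failed for %q: %w", component, err)
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainerIDs(c)
		if err != nil {
			fmt.Println(err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}
```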
	I1213 19:12:29.471412   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:29.482208   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:29.482279   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:29.518089   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:29.518111   92925 cri.go:89] found id: ""
	I1213 19:12:29.518120   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:29.518179   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:29.522151   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:29.522316   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:29.550522   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:29.550548   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:29.550553   92925 cri.go:89] found id: ""
	I1213 19:12:29.550561   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:29.550614   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:29.554476   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:29.557855   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:29.557927   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:29.585314   92925 cri.go:89] found id: ""
	I1213 19:12:29.585337   92925 logs.go:282] 0 containers: []
	W1213 19:12:29.585346   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:29.585352   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:29.585415   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:29.613061   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:29.613081   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:29.613087   92925 cri.go:89] found id: ""
	I1213 19:12:29.613094   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:29.613149   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:29.617383   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:29.621127   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:29.621198   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:29.648388   92925 cri.go:89] found id: ""
	I1213 19:12:29.648415   92925 logs.go:282] 0 containers: []
	W1213 19:12:29.648425   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:29.648434   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:29.648493   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:29.675800   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:29.675823   92925 cri.go:89] found id: ""
	I1213 19:12:29.675832   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:29.675885   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:29.679891   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:29.679964   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:29.708415   92925 cri.go:89] found id: ""
	I1213 19:12:29.708439   92925 logs.go:282] 0 containers: []
	W1213 19:12:29.708447   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:29.708457   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:29.708469   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:29.747281   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:29.747357   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:29.791340   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:29.791374   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:29.834406   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:29.834436   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:29.861132   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:29.861162   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:29.962754   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:29.962831   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:29.975698   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:29.975725   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:30.136167   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:30.136206   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:30.219391   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:30.219426   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:30.250060   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:30.250090   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:30.324085   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:30.315913    6276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:30.316779    6276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:30.318083    6276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:30.318787    6276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:30.320486    6276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:30.315913    6276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:30.316779    6276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:30.318083    6276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:30.318787    6276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:30.320486    6276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:30.324108   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:30.324122   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
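	(Each block above is one iteration of the same health-check loop: roughly every three seconds the runner probes for a kube-apiserver process with `pgrep -xnf kube-apiserver.*minikube.*` and, while the apiserver stays unreachable, re-gathers the component logs. A rough Go sketch of such a poll-until-healthy loop; the interval, timeout, and the gatherDiagnostics placeholder are illustrative assumptions, not minikube's actual implementation:)

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning mirrors the "pgrep -xnf kube-apiserver.*minikube.*" probe
// from the log: pgrep exits 0 only when a matching process exists.
func apiserverRunning() bool {
	return exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

// gatherDiagnostics stands in for the per-component log collection
// (kubelet, dmesg, etcd, kube-scheduler, ...) done on every iteration.
func gatherDiagnostics() {
	fmt.Println("gathering component logs ...")
}

func main() {
	const interval = 3 * time.Second // approximate spacing seen in the log timestamps
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver process found")
			return
		}
		gatherDiagnostics()
		time.Sleep(interval)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
```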
	I1213 19:12:32.849129   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:32.861076   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:32.861146   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:32.890816   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:32.890837   92925 cri.go:89] found id: ""
	I1213 19:12:32.890845   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:32.890899   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:32.894607   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:32.894684   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:32.925830   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:32.925856   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:32.925861   92925 cri.go:89] found id: ""
	I1213 19:12:32.925868   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:32.925921   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:32.929582   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:32.932913   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:32.932983   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:32.959171   92925 cri.go:89] found id: ""
	I1213 19:12:32.959199   92925 logs.go:282] 0 containers: []
	W1213 19:12:32.959208   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:32.959214   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:32.959319   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:32.993282   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:32.993309   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:32.993315   92925 cri.go:89] found id: ""
	I1213 19:12:32.993331   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:32.993393   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:32.997923   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:33.002009   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:33.002111   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:33.029187   92925 cri.go:89] found id: ""
	I1213 19:12:33.029210   92925 logs.go:282] 0 containers: []
	W1213 19:12:33.029219   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:33.029225   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:33.029333   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:33.057252   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:33.057287   92925 cri.go:89] found id: ""
	I1213 19:12:33.057296   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:33.057360   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:33.061234   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:33.061340   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:33.089861   92925 cri.go:89] found id: ""
	I1213 19:12:33.089889   92925 logs.go:282] 0 containers: []
	W1213 19:12:33.089898   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:33.089907   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:33.089919   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:33.108679   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:33.108710   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:33.162722   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:33.162768   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:33.227823   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:33.227861   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:33.260183   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:33.260210   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:33.286847   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:33.286872   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:33.368228   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:33.368263   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:33.475747   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:33.475786   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:33.554192   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:33.546124    6395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:33.546992    6395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:33.548557    6395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:33.549128    6395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:33.550628    6395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:33.546124    6395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:33.546992    6395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:33.548557    6395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:33.549128    6395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:33.550628    6395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:33.554212   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:33.554225   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:33.579823   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:33.579850   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:33.623777   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:33.623815   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
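	(Every `kubectl describe nodes` attempt above fails the same way: nothing is accepting connections on localhost:8443, so kubectl's API discovery calls get `connection refused`. A small illustrative Go sketch of probing that port directly, which is purely a way to reproduce the symptom — it is not how the test harness itself decides when to run kubectl:)

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// apiserverReachable reports whether something is accepting TCP connections
// on the apiserver address that kubectl keeps failing to reach in the log above.
func apiserverReachable(addr string, timeout time.Duration) bool {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return false // e.g. "connect: connection refused", as in the log
	}
	conn.Close()
	return true
}

func main() {
	if apiserverReachable("localhost:8443", 2*time.Second) {
		fmt.Println("localhost:8443 is open; kubectl describe nodes should be able to connect")
	} else {
		fmt.Println("localhost:8443 refused/unreachable; kubectl will fail as shown in the log")
	}
}
```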
	I1213 19:12:36.157314   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:36.168502   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:36.168576   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:36.196421   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:36.196442   92925 cri.go:89] found id: ""
	I1213 19:12:36.196451   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:36.196511   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:36.200568   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:36.200636   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:36.227300   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:36.227324   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:36.227331   92925 cri.go:89] found id: ""
	I1213 19:12:36.227338   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:36.227396   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:36.231459   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:36.235239   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:36.235316   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:36.268611   92925 cri.go:89] found id: ""
	I1213 19:12:36.268635   92925 logs.go:282] 0 containers: []
	W1213 19:12:36.268644   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:36.268650   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:36.268731   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:36.308479   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:36.308576   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:36.308597   92925 cri.go:89] found id: ""
	I1213 19:12:36.308642   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:36.308738   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:36.312547   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:36.316077   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:36.316189   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:36.342346   92925 cri.go:89] found id: ""
	I1213 19:12:36.342382   92925 logs.go:282] 0 containers: []
	W1213 19:12:36.342392   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:36.342414   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:36.342496   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:36.368808   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:36.368834   92925 cri.go:89] found id: ""
	I1213 19:12:36.368844   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:36.368899   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:36.372705   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:36.372790   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:36.399760   92925 cri.go:89] found id: ""
	I1213 19:12:36.399796   92925 logs.go:282] 0 containers: []
	W1213 19:12:36.399805   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:36.399817   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:36.399829   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:36.497016   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:36.497097   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:36.511432   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:36.511552   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:36.587222   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:36.577960    6497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:36.578711    6497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:36.580805    6497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:36.581572    6497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:36.583427    6497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:36.577960    6497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:36.578711    6497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:36.580805    6497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:36.581572    6497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:36.583427    6497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:36.587247   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:36.587262   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:36.630739   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:36.630774   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:36.683440   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:36.683473   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:36.751190   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:36.751241   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:36.779744   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:36.779833   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:36.806180   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:36.806206   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:36.832449   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:36.832475   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:36.910859   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:36.910900   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
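	(The "container status" step above uses a shell fallback: prefer `crictl ps -a`, and only if that fails fall back to `docker ps -a`. A minimal Go sketch of the same runtime-agnostic fallback, with the sudo/SSH plumbing of the real run omitted as an assumption of this sketch:)

```go
package main

import (
	"fmt"
	"os/exec"
)

// containerStatus tries crictl first and falls back to docker, mirroring the
// `sudo `which crictl || echo crictl` ps -a || sudo docker ps -a` fallback
// used for the "container status" gathering in the log above.
func containerStatus() (string, error) {
	if out, err := exec.Command("crictl", "ps", "-a").Output(); err == nil {
		return string(out), nil
	}
	out, err := exec.Command("docker", "ps", "-a").Output()
	if err != nil {
		return "", fmt.Errorf("neither crictl nor docker produced container status: %w", err)
	}
	return string(out), nil
}

func main() {
	status, err := containerStatus()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Print(status)
}
```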
	I1213 19:12:39.441151   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:39.452365   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:39.452439   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:39.484411   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:39.484436   92925 cri.go:89] found id: ""
	I1213 19:12:39.484444   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:39.484499   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:39.488316   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:39.488390   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:39.519236   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:39.519263   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:39.519268   92925 cri.go:89] found id: ""
	I1213 19:12:39.519277   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:39.519331   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:39.523340   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:39.529308   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:39.529377   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:39.559339   92925 cri.go:89] found id: ""
	I1213 19:12:39.559405   92925 logs.go:282] 0 containers: []
	W1213 19:12:39.559437   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:39.559456   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:39.559543   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:39.589737   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:39.589769   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:39.589775   92925 cri.go:89] found id: ""
	I1213 19:12:39.589783   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:39.589848   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:39.593976   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:39.598330   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:39.598421   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:39.631670   92925 cri.go:89] found id: ""
	I1213 19:12:39.631699   92925 logs.go:282] 0 containers: []
	W1213 19:12:39.631708   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:39.631714   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:39.631783   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:39.662738   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:39.662803   92925 cri.go:89] found id: ""
	I1213 19:12:39.662824   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:39.662906   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:39.666773   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:39.666867   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:39.695600   92925 cri.go:89] found id: ""
	I1213 19:12:39.695627   92925 logs.go:282] 0 containers: []
	W1213 19:12:39.695637   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:39.695646   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:39.695658   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:39.787866   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:39.787904   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:39.864556   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:39.853140    6628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:39.856488    6628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:39.857226    6628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:39.858708    6628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:39.859314    6628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:39.853140    6628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:39.856488    6628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:39.857226    6628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:39.858708    6628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:39.859314    6628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:39.864580   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:39.864594   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:39.893552   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:39.893593   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:39.935040   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:39.935070   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:39.977962   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:39.977992   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:40.052674   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:40.052713   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:40.145597   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:40.145709   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:40.181340   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:40.181368   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:40.194929   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:40.194999   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:40.222595   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:40.222665   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:42.749068   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:42.760019   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:42.760098   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:42.790868   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:42.790891   92925 cri.go:89] found id: ""
	I1213 19:12:42.790898   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:42.790953   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:42.794682   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:42.794770   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:42.823001   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:42.823024   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:42.823029   92925 cri.go:89] found id: ""
	I1213 19:12:42.823036   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:42.823102   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:42.826966   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:42.830581   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:42.830667   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:42.857298   92925 cri.go:89] found id: ""
	I1213 19:12:42.857325   92925 logs.go:282] 0 containers: []
	W1213 19:12:42.857334   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:42.857340   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:42.857402   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:42.888499   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:42.888524   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:42.888528   92925 cri.go:89] found id: ""
	I1213 19:12:42.888535   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:42.888601   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:42.894724   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:42.898823   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:42.898944   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:42.925225   92925 cri.go:89] found id: ""
	I1213 19:12:42.925262   92925 logs.go:282] 0 containers: []
	W1213 19:12:42.925271   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:42.925277   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:42.925363   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:42.954151   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:42.954186   92925 cri.go:89] found id: ""
	I1213 19:12:42.954195   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:42.954262   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:42.958191   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:42.958256   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:42.997632   92925 cri.go:89] found id: ""
	I1213 19:12:42.997699   92925 logs.go:282] 0 containers: []
	W1213 19:12:42.997722   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:42.997738   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:42.997750   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:43.044934   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:43.044968   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:43.130707   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:43.130787   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:43.162064   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:43.162196   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:43.174781   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:43.174807   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:43.248282   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:43.239057    6792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:43.239785    6792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:43.241456    6792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:43.242060    6792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:43.243778    6792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:43.239057    6792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:43.239785    6792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:43.241456    6792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:43.242060    6792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:43.243778    6792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:43.248309   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:43.248322   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:43.292697   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:43.292729   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:43.326878   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:43.326906   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:43.402321   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:43.402356   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:43.434630   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:43.434662   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:43.547901   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:43.547940   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
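	(All of the per-container gathering above uses `crictl logs --tail 400 <id>`, i.e. only the most recent 400 lines of each control-plane container are collected per iteration. A short Go sketch of that call, assuming a `crictl` binary on PATH rather than the /usr/local/bin path invoked over SSH in the log:)

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// tailContainerLogs prints the last n lines of a container's log via crictl,
// as done for each found container ID in the log above.
func tailContainerLogs(id string, n int) error {
	cmd := exec.Command("crictl", "logs", "--tail", fmt.Sprint(n), id)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	if len(os.Args) < 2 {
		fmt.Println("usage: taillogs <container-id>")
		return
	}
	if err := tailContainerLogs(os.Args[1], 400); err != nil {
		fmt.Println("crictl logs failed:", err)
	}
}
```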
	I1213 19:12:46.074896   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:46.086088   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:46.086156   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:46.138954   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:46.138977   92925 cri.go:89] found id: ""
	I1213 19:12:46.138985   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:46.139041   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:46.142934   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:46.143008   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:46.167983   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:46.168008   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:46.168014   92925 cri.go:89] found id: ""
	I1213 19:12:46.168022   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:46.168083   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:46.172203   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:46.176085   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:46.176164   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:46.206474   92925 cri.go:89] found id: ""
	I1213 19:12:46.206501   92925 logs.go:282] 0 containers: []
	W1213 19:12:46.206509   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:46.206515   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:46.206572   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:46.232990   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:46.233047   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:46.233052   92925 cri.go:89] found id: ""
	I1213 19:12:46.233059   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:46.233121   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:46.236960   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:46.241098   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:46.241171   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:46.277846   92925 cri.go:89] found id: ""
	I1213 19:12:46.277872   92925 logs.go:282] 0 containers: []
	W1213 19:12:46.277881   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:46.277886   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:46.277945   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:46.306293   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:46.306316   92925 cri.go:89] found id: ""
	I1213 19:12:46.306324   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:46.306383   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:46.310146   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:46.310220   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:46.337703   92925 cri.go:89] found id: ""
	I1213 19:12:46.337728   92925 logs.go:282] 0 containers: []
	W1213 19:12:46.337737   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:46.337746   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:46.337757   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:46.433354   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:46.433391   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:46.446062   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:46.446089   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:46.474866   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:46.474894   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:46.518894   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:46.518972   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:46.584190   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:46.584221   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:46.612728   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:46.612798   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:46.693365   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:46.693401   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:46.730005   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:46.730036   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:46.805821   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:46.797250    6951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:46.797857    6951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:46.799401    6951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:46.799906    6951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:46.801867    6951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:46.797250    6951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:46.797857    6951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:46.799401    6951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:46.799906    6951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:46.801867    6951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:46.805844   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:46.805858   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:46.849142   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:46.849180   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:49.377325   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:49.388007   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:49.388073   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:49.414745   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:49.414768   92925 cri.go:89] found id: ""
	I1213 19:12:49.414777   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:49.414831   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:49.418502   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:49.418579   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:49.443751   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:49.443772   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:49.443777   92925 cri.go:89] found id: ""
	I1213 19:12:49.443784   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:49.443864   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:49.447524   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:49.450957   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:49.451025   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:49.478284   92925 cri.go:89] found id: ""
	I1213 19:12:49.478309   92925 logs.go:282] 0 containers: []
	W1213 19:12:49.478318   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:49.478324   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:49.478383   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:49.506581   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:49.506604   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:49.506609   92925 cri.go:89] found id: ""
	I1213 19:12:49.506617   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:49.506673   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:49.513976   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:49.518489   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:49.518567   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:49.545961   92925 cri.go:89] found id: ""
	I1213 19:12:49.545986   92925 logs.go:282] 0 containers: []
	W1213 19:12:49.545995   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:49.546001   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:49.546072   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:49.579946   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:49.579974   92925 cri.go:89] found id: ""
	I1213 19:12:49.579983   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:49.580036   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:49.583648   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:49.583726   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:49.610201   92925 cri.go:89] found id: ""
	I1213 19:12:49.610278   92925 logs.go:282] 0 containers: []
	W1213 19:12:49.610294   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:49.610304   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:49.610321   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:49.682958   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:49.682995   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:49.716028   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:49.716058   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:49.744220   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:49.744248   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:49.783347   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:49.783379   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:49.826736   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:49.826770   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:49.860737   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:49.860767   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:49.894176   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:49.894206   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:49.978486   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:49.978525   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:50.088530   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:50.088567   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:50.107858   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:50.107886   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:50.186950   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:50.178748    7100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:50.179306    7100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:50.180827    7100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:50.181343    7100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:50.182902    7100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:50.178748    7100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:50.179306    7100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:50.180827    7100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:50.181343    7100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:50.182902    7100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:52.687879   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:52.700111   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:52.700185   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:52.727611   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:52.727635   92925 cri.go:89] found id: ""
	I1213 19:12:52.727643   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:52.727699   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:52.732611   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:52.732683   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:52.760331   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:52.760355   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:52.760361   92925 cri.go:89] found id: ""
	I1213 19:12:52.760369   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:52.760424   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:52.764203   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:52.767807   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:52.767880   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:52.794453   92925 cri.go:89] found id: ""
	I1213 19:12:52.794528   92925 logs.go:282] 0 containers: []
	W1213 19:12:52.794552   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:52.794571   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:52.794662   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:52.824938   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:52.825046   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:52.825077   92925 cri.go:89] found id: ""
	I1213 19:12:52.825108   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:52.825170   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:52.828865   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:52.832644   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:52.832718   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:52.860489   92925 cri.go:89] found id: ""
	I1213 19:12:52.860512   92925 logs.go:282] 0 containers: []
	W1213 19:12:52.860521   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:52.860527   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:52.860588   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:52.886828   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:52.886862   92925 cri.go:89] found id: ""
	I1213 19:12:52.886872   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:52.886940   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:52.890986   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:52.891106   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:52.917681   92925 cri.go:89] found id: ""
	I1213 19:12:52.917749   92925 logs.go:282] 0 containers: []
	W1213 19:12:52.917776   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:52.917799   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:52.917837   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:52.948506   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:52.948535   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:52.977936   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:52.977963   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:53.041212   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:53.041249   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:53.080162   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:53.080189   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:53.174852   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:53.174897   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:53.273766   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:53.273802   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:53.285893   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:53.285925   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:53.352966   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:53.343677    7212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:53.345158    7212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:53.345928    7212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:53.347424    7212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:53.347925    7212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:53.343677    7212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:53.345158    7212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:53.345928    7212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:53.347424    7212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:53.347925    7212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:53.352990   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:53.353032   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:53.391432   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:53.391464   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:53.451329   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:53.451363   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:55.977809   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:55.993375   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:55.993492   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:56.026972   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:56.026993   92925 cri.go:89] found id: ""
	I1213 19:12:56.027001   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:56.027059   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:56.031128   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:56.031204   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:56.058936   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:56.058958   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:56.058963   92925 cri.go:89] found id: ""
	I1213 19:12:56.058971   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:56.059024   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:56.062862   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:56.066757   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:56.066858   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:56.096088   92925 cri.go:89] found id: ""
	I1213 19:12:56.096112   92925 logs.go:282] 0 containers: []
	W1213 19:12:56.096121   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:56.096134   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:56.096196   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:56.138653   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:56.138678   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:56.138683   92925 cri.go:89] found id: ""
	I1213 19:12:56.138691   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:56.138748   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:56.142767   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:56.146336   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:56.146413   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:56.176996   92925 cri.go:89] found id: ""
	I1213 19:12:56.177098   92925 logs.go:282] 0 containers: []
	W1213 19:12:56.177115   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:56.177122   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:56.177191   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:56.206318   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:56.206341   92925 cri.go:89] found id: ""
	I1213 19:12:56.206350   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:56.206405   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:56.210085   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:56.210208   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:56.240242   92925 cri.go:89] found id: ""
	I1213 19:12:56.240269   92925 logs.go:282] 0 containers: []
	W1213 19:12:56.240278   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:56.240287   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:56.240299   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:56.268772   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:56.268800   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:56.282265   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:56.282293   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:56.334697   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:56.334731   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:56.419986   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:56.420074   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:56.466391   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:56.466421   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:56.578289   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:56.578327   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:56.657266   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:56.648227    7343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:56.649364    7343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:56.650885    7343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:56.651401    7343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:56.653076    7343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:56.648227    7343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:56.649364    7343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:56.650885    7343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:56.651401    7343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:56.653076    7343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:56.657289   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:56.657302   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:56.685603   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:56.685631   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:56.732451   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:12:56.732487   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:56.807034   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:12:56.807068   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:59.335877   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:12:59.346983   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:12:59.347053   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:12:59.375213   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:59.375241   92925 cri.go:89] found id: ""
	I1213 19:12:59.375250   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:12:59.375308   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:59.379246   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:12:59.379319   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:12:59.406052   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:59.406073   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:12:59.406078   92925 cri.go:89] found id: ""
	I1213 19:12:59.406085   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:12:59.406142   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:59.409969   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:59.413744   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:12:59.413813   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:12:59.440031   92925 cri.go:89] found id: ""
	I1213 19:12:59.440057   92925 logs.go:282] 0 containers: []
	W1213 19:12:59.440066   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:12:59.440072   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:12:59.440131   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:12:59.470750   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:12:59.470770   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:12:59.470775   92925 cri.go:89] found id: ""
	I1213 19:12:59.470782   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:12:59.470836   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:59.474671   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:59.478148   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:12:59.478230   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:12:59.532301   92925 cri.go:89] found id: ""
	I1213 19:12:59.532334   92925 logs.go:282] 0 containers: []
	W1213 19:12:59.532344   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:12:59.532350   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:12:59.532423   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:12:59.558719   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:59.558742   92925 cri.go:89] found id: ""
	I1213 19:12:59.558750   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:12:59.558814   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:12:59.562460   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:12:59.562534   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:12:59.588851   92925 cri.go:89] found id: ""
	I1213 19:12:59.588916   92925 logs.go:282] 0 containers: []
	W1213 19:12:59.588942   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:12:59.588964   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:12:59.589031   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:12:59.665993   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:12:59.666032   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:12:59.712805   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:12:59.712839   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:12:59.725635   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:12:59.725688   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:12:59.797796   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:12:59.790093    7462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:59.790845    7462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:59.791906    7462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:59.792472    7462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:59.794170    7462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:12:59.790093    7462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:59.790845    7462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:59.791906    7462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:59.792472    7462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:12:59.794170    7462 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:12:59.797819   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:12:59.797831   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:12:59.825855   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:12:59.825886   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:12:59.864251   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:12:59.864286   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:12:59.890125   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:12:59.890151   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:12:59.981337   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:12:59.981387   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:00.239751   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:00.239799   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:00.366187   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:00.368005   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:02.909028   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:02.919617   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:02.919732   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:02.946548   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:02.946613   92925 cri.go:89] found id: ""
	I1213 19:13:02.946629   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:02.946696   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:02.950448   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:02.950542   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:02.975550   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:02.975572   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:02.975577   92925 cri.go:89] found id: ""
	I1213 19:13:02.975585   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:02.975645   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:02.979406   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:02.984704   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:02.984818   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:03.017288   92925 cri.go:89] found id: ""
	I1213 19:13:03.017311   92925 logs.go:282] 0 containers: []
	W1213 19:13:03.017320   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:03.017334   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:03.017393   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:03.048824   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:03.048850   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:03.048857   92925 cri.go:89] found id: ""
	I1213 19:13:03.048864   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:03.048919   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:03.052630   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:03.056397   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:03.056521   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:03.088050   92925 cri.go:89] found id: ""
	I1213 19:13:03.088123   92925 logs.go:282] 0 containers: []
	W1213 19:13:03.088146   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:03.088165   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:03.088271   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:03.119709   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:13:03.119778   92925 cri.go:89] found id: ""
	I1213 19:13:03.119801   92925 logs.go:282] 1 containers: [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:13:03.119889   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:03.127122   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:03.127274   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:03.162913   92925 cri.go:89] found id: ""
	I1213 19:13:03.162936   92925 logs.go:282] 0 containers: []
	W1213 19:13:03.162945   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:03.162953   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:03.162966   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:03.207543   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:03.207579   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:03.279537   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:13:03.279575   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:13:03.314034   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:03.314062   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:03.394532   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:03.394567   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:03.428318   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:03.428351   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:03.528148   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:03.528187   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:03.626750   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:03.618493    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:03.619154    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:03.620764    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:03.621367    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:03.622889    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:03.618493    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:03.619154    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:03.620764    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:03.621367    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:03.622889    7619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:03.626775   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:03.626788   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:03.685480   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:03.685519   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:03.713856   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:03.713883   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:03.734590   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:03.734620   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:06.266879   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:06.277733   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:06.277799   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:06.305175   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:06.305196   92925 cri.go:89] found id: ""
	I1213 19:13:06.305204   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:06.305258   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:06.308850   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:06.308928   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:06.335153   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:06.335177   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:06.335182   92925 cri.go:89] found id: ""
	I1213 19:13:06.335189   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:06.335246   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:06.338903   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:06.342418   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:06.342493   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:06.372604   92925 cri.go:89] found id: ""
	I1213 19:13:06.372632   92925 logs.go:282] 0 containers: []
	W1213 19:13:06.372641   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:06.372646   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:06.372707   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:06.402642   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:06.402670   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:06.402675   92925 cri.go:89] found id: ""
	I1213 19:13:06.402682   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:06.402740   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:06.406787   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:06.411254   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:06.411335   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:06.437659   92925 cri.go:89] found id: ""
	I1213 19:13:06.437736   92925 logs.go:282] 0 containers: []
	W1213 19:13:06.437751   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:06.437758   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:06.437829   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:06.466702   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:06.466725   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:13:06.466730   92925 cri.go:89] found id: ""
	I1213 19:13:06.466737   92925 logs.go:282] 2 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:13:06.466793   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:06.470567   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:06.474150   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:06.474224   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:06.501494   92925 cri.go:89] found id: ""
	I1213 19:13:06.501569   92925 logs.go:282] 0 containers: []
	W1213 19:13:06.501594   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:06.501617   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:06.501662   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:06.544779   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:06.544813   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:06.609379   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:06.609413   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:06.637668   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:06.637698   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:06.664078   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:06.664105   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:06.709192   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:13:06.709225   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:13:06.737814   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:06.737845   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:06.810267   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:06.810302   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:06.841843   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:06.841871   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:06.938739   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:06.938776   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:06.951386   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:06.951414   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:07.032986   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:07.025075    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:07.025642    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:07.027282    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:07.027955    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:07.029566    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:07.025075    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:07.025642    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:07.027282    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:07.027955    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:07.029566    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:07.033040   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:07.033053   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:09.558493   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:09.570604   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:09.570681   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:09.598108   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:09.598133   92925 cri.go:89] found id: ""
	I1213 19:13:09.598141   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:09.598197   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:09.602596   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:09.602673   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:09.629705   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:09.629727   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:09.629733   92925 cri.go:89] found id: ""
	I1213 19:13:09.629741   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:09.629798   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:09.634280   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:09.637817   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:09.637895   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:09.665414   92925 cri.go:89] found id: ""
	I1213 19:13:09.665438   92925 logs.go:282] 0 containers: []
	W1213 19:13:09.665447   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:09.665453   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:09.665509   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:09.691729   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:09.691754   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:09.691759   92925 cri.go:89] found id: ""
	I1213 19:13:09.691766   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:09.691850   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:09.696064   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:09.700204   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:09.700308   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:09.732154   92925 cri.go:89] found id: ""
	I1213 19:13:09.732181   92925 logs.go:282] 0 containers: []
	W1213 19:13:09.732190   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:09.732196   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:09.732277   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:09.760821   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:09.760844   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:13:09.760849   92925 cri.go:89] found id: ""
	I1213 19:13:09.760856   92925 logs.go:282] 2 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:13:09.760918   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:09.764697   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:09.768225   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:09.768299   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:09.796678   92925 cri.go:89] found id: ""
	I1213 19:13:09.796748   92925 logs.go:282] 0 containers: []
	W1213 19:13:09.796773   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:09.796797   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:09.796844   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:09.892500   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:09.892536   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:09.905527   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:09.905557   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:09.964751   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:09.964785   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:10.026858   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:10.026896   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:10.095709   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:10.095747   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:10.135797   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:10.135834   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:10.207467   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:10.198321    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:10.199090    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:10.200887    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:10.201755    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:10.202624    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:10.198321    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:10.199090    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:10.200887    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:10.201755    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:10.202624    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:10.207502   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:10.207515   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:10.233202   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:10.233298   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:10.259818   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:13:10.259845   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:13:10.286455   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:10.286482   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:10.359430   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:10.359465   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:12.894266   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:12.905675   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:12.905773   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:12.932239   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:12.932259   92925 cri.go:89] found id: ""
	I1213 19:13:12.932267   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:12.932320   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:12.935869   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:12.935938   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:12.961758   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:12.961778   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:12.961782   92925 cri.go:89] found id: ""
	I1213 19:13:12.961789   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:12.961846   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:12.965449   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:12.968967   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:12.969071   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:13.001173   92925 cri.go:89] found id: ""
	I1213 19:13:13.001203   92925 logs.go:282] 0 containers: []
	W1213 19:13:13.001213   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:13.001219   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:13.001333   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:13.029728   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:13.029751   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:13.029756   92925 cri.go:89] found id: ""
	I1213 19:13:13.029764   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:13.029818   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:13.033632   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:13.037474   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:13.037598   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:13.064000   92925 cri.go:89] found id: ""
	I1213 19:13:13.064025   92925 logs.go:282] 0 containers: []
	W1213 19:13:13.064034   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:13.064040   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:13.064151   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:13.092827   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:13.092847   92925 cri.go:89] found id: "27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:13:13.092852   92925 cri.go:89] found id: ""
	I1213 19:13:13.092859   92925 logs.go:282] 2 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7]
	I1213 19:13:13.092913   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:13.097637   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:13.102128   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:13.102195   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:13.132820   92925 cri.go:89] found id: ""
	I1213 19:13:13.132891   92925 logs.go:282] 0 containers: []
	W1213 19:13:13.132912   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:13.132934   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:13.132976   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:13.200851   92925 logs.go:123] Gathering logs for kube-controller-manager [27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7] ...
	I1213 19:13:13.200889   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b6c088d76a4b399dd5665026596ea7636dac4d09159e62da9fcff1c2c0a9a7"
	I1213 19:13:13.232573   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:13.232603   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:13.325521   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:13.325556   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:13.338293   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:13.338324   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:13.369921   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:13.369950   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:13.416445   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:13.416477   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:13.443214   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:13.443243   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:13.468415   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:13.468448   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:13.553200   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:13.553248   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:13.596683   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:13.596717   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:13.678127   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:13.669907    8089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:13.670748    8089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:13.672392    8089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:13.672709    8089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:13.674262    8089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:13.669907    8089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:13.670748    8089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:13.672392    8089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:13.672709    8089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:13.674262    8089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:13.678150   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:13.678167   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:16.227377   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:16.238613   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:16.238685   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:16.271628   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:16.271652   92925 cri.go:89] found id: ""
	I1213 19:13:16.271661   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:16.271717   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:16.275571   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:16.275645   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:16.304819   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:16.304843   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:16.304848   92925 cri.go:89] found id: ""
	I1213 19:13:16.304856   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:16.304911   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:16.308802   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:16.312668   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:16.312741   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:16.347113   92925 cri.go:89] found id: ""
	I1213 19:13:16.347137   92925 logs.go:282] 0 containers: []
	W1213 19:13:16.347146   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:16.347153   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:16.347209   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:16.380339   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:16.380362   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:16.380368   92925 cri.go:89] found id: ""
	I1213 19:13:16.380376   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:16.380433   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:16.383986   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:16.387756   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:16.387876   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:16.419309   92925 cri.go:89] found id: ""
	I1213 19:13:16.419344   92925 logs.go:282] 0 containers: []
	W1213 19:13:16.419353   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:16.419359   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:16.419427   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:16.447987   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:16.448019   92925 cri.go:89] found id: ""
	I1213 19:13:16.448028   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:16.448093   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:16.452467   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:16.452551   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:16.478206   92925 cri.go:89] found id: ""
	I1213 19:13:16.478271   92925 logs.go:282] 0 containers: []
	W1213 19:13:16.478298   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:16.478319   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:16.478361   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:16.505859   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:16.505891   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:16.547050   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:16.547085   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:16.591041   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:16.591074   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:16.659418   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:16.659502   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:16.686174   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:16.686202   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:16.763753   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:16.763792   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:16.795967   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:16.795996   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:16.909202   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:16.909246   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:16.921936   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:16.921962   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:16.996415   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:16.987820    8228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:16.988740    8228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:16.990501    8228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:16.990844    8228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:16.992387    8228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:16.987820    8228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:16.988740    8228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:16.990501    8228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:16.990844    8228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:16.992387    8228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:16.996438   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:16.996452   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:19.525182   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:19.536170   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:19.536246   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:19.563344   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:19.563368   92925 cri.go:89] found id: ""
	I1213 19:13:19.563377   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:19.563432   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:19.567191   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:19.567263   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:19.594906   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:19.594926   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:19.594936   92925 cri.go:89] found id: ""
	I1213 19:13:19.594944   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:19.595012   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:19.599420   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:19.603163   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:19.603240   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:19.636656   92925 cri.go:89] found id: ""
	I1213 19:13:19.636681   92925 logs.go:282] 0 containers: []
	W1213 19:13:19.636690   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:19.636696   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:19.636753   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:19.667204   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:19.667274   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:19.667292   92925 cri.go:89] found id: ""
	I1213 19:13:19.667316   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:19.667395   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:19.671184   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:19.674972   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:19.675041   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:19.704947   92925 cri.go:89] found id: ""
	I1213 19:13:19.704971   92925 logs.go:282] 0 containers: []
	W1213 19:13:19.704980   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:19.704988   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:19.705073   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:19.730669   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:19.730691   92925 cri.go:89] found id: ""
	I1213 19:13:19.730699   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:19.730771   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:19.735384   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:19.735477   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:19.760611   92925 cri.go:89] found id: ""
	I1213 19:13:19.760634   92925 logs.go:282] 0 containers: []
	W1213 19:13:19.760643   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:19.760669   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:19.760686   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:19.788592   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:19.788621   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:19.882694   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:19.882730   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:19.954514   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:19.946675    8317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:19.947253    8317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:19.948589    8317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:19.949210    8317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:19.950900    8317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:19.946675    8317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:19.947253    8317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:19.948589    8317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:19.949210    8317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:19.950900    8317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:19.954535   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:19.954550   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:19.980616   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:19.980694   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:20.035895   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:20.035930   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:20.104716   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:20.104768   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:20.199665   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:20.199701   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:20.234652   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:20.234680   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:20.248416   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:20.248444   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:20.296588   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:20.296624   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:22.824017   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:22.838193   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:22.838267   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:22.874481   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:22.874503   92925 cri.go:89] found id: ""
	I1213 19:13:22.874512   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:22.874578   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:22.878378   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:22.878467   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:22.907053   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:22.907075   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:22.907079   92925 cri.go:89] found id: ""
	I1213 19:13:22.907086   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:22.907143   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:22.911144   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:22.914933   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:22.915007   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:22.942646   92925 cri.go:89] found id: ""
	I1213 19:13:22.942714   92925 logs.go:282] 0 containers: []
	W1213 19:13:22.942729   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:22.942736   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:22.942797   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:22.969713   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:22.969735   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:22.969740   92925 cri.go:89] found id: ""
	I1213 19:13:22.969748   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:22.969804   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:22.973708   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:22.977426   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:22.977514   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:23.007912   92925 cri.go:89] found id: ""
	I1213 19:13:23.007939   92925 logs.go:282] 0 containers: []
	W1213 19:13:23.007948   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:23.007955   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:23.008018   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:23.040260   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:23.040284   92925 cri.go:89] found id: ""
	I1213 19:13:23.040293   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:23.040348   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:23.044273   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:23.044348   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:23.073414   92925 cri.go:89] found id: ""
	I1213 19:13:23.073445   92925 logs.go:282] 0 containers: []
	W1213 19:13:23.073454   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:23.073466   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:23.073478   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:23.147486   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:23.147526   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:23.180397   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:23.180426   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:23.262279   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:23.253482    8457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:23.254529    8457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:23.255324    8457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:23.256834    8457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:23.257439    8457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:23.253482    8457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:23.254529    8457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:23.255324    8457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:23.256834    8457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:23.257439    8457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:23.262302   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:23.262318   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:23.288912   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:23.288942   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:23.328328   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:23.328366   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:23.421984   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:23.422020   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:23.524961   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:23.524997   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:23.542790   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:23.542821   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:23.591486   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:23.591522   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:23.621748   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:23.621777   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:26.152673   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:26.164673   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:26.164740   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:26.192010   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:26.192031   92925 cri.go:89] found id: ""
	I1213 19:13:26.192040   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:26.192095   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:26.195849   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:26.195918   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:26.224593   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:26.224657   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:26.224677   92925 cri.go:89] found id: ""
	I1213 19:13:26.224702   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:26.224772   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:26.228545   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:26.231970   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:26.232086   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:26.259044   92925 cri.go:89] found id: ""
	I1213 19:13:26.259066   92925 logs.go:282] 0 containers: []
	W1213 19:13:26.259075   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:26.259080   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:26.259137   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:26.287771   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:26.287793   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:26.287798   92925 cri.go:89] found id: ""
	I1213 19:13:26.287805   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:26.287861   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:26.293156   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:26.296722   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:26.296805   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:26.323701   92925 cri.go:89] found id: ""
	I1213 19:13:26.323731   92925 logs.go:282] 0 containers: []
	W1213 19:13:26.323746   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:26.323753   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:26.323820   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:26.350119   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:26.350137   92925 cri.go:89] found id: ""
	I1213 19:13:26.350145   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:26.350199   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:26.353849   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:26.353916   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:26.380009   92925 cri.go:89] found id: ""
	I1213 19:13:26.380035   92925 logs.go:282] 0 containers: []
	W1213 19:13:26.380044   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:26.380053   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:26.380065   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:26.438029   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:26.438062   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:26.475066   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:26.475096   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:26.507857   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:26.507887   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:26.521466   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:26.521493   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:26.565942   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:26.565983   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:26.634647   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:26.634680   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:26.662943   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:26.662972   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:26.737712   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:26.737749   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:26.840754   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:26.840792   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:26.911511   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:26.903881    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:26.904637    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:26.906164    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:26.906441    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:26.907906    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:26.903881    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:26.904637    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:26.906164    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:26.906441    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:26.907906    8643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:26.911534   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:26.911547   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:29.438403   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:29.449664   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:29.449742   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:29.477323   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:29.477342   92925 cri.go:89] found id: ""
	I1213 19:13:29.477351   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:29.477405   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:29.480946   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:29.481052   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:29.515446   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:29.515469   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:29.515473   92925 cri.go:89] found id: ""
	I1213 19:13:29.515480   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:29.515537   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:29.520209   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:29.523894   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:29.523994   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:29.550207   92925 cri.go:89] found id: ""
	I1213 19:13:29.550232   92925 logs.go:282] 0 containers: []
	W1213 19:13:29.550242   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:29.550272   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:29.550349   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:29.576154   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:29.576177   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:29.576182   92925 cri.go:89] found id: ""
	I1213 19:13:29.576195   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:29.576267   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:29.580154   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:29.583801   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:29.583876   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:29.613771   92925 cri.go:89] found id: ""
	I1213 19:13:29.613795   92925 logs.go:282] 0 containers: []
	W1213 19:13:29.613805   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:29.613810   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:29.613872   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:29.640080   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:29.640103   92925 cri.go:89] found id: ""
	I1213 19:13:29.640112   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:29.640167   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:29.643810   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:29.643883   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:29.674496   92925 cri.go:89] found id: ""
	I1213 19:13:29.674567   92925 logs.go:282] 0 containers: []
	W1213 19:13:29.674583   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:29.674592   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:29.674616   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:29.704354   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:29.704383   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:29.760688   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:29.760724   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:29.789616   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:29.789644   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:29.817300   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:29.817328   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:29.848838   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:29.848866   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:29.949492   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:29.949527   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:30.081487   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:30.081528   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:30.170948   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:30.170989   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:30.251666   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:30.251705   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:30.265404   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:30.265433   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:30.340984   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:30.332491    8789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:30.333283    8789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:30.335347    8789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:30.335760    8789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:30.337330    8789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:30.332491    8789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:30.333283    8789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:30.335347    8789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:30.335760    8789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:30.337330    8789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:32.841244   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:32.851830   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:32.851904   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:32.878262   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:32.878282   92925 cri.go:89] found id: ""
	I1213 19:13:32.878290   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:32.878345   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:32.881794   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:32.881871   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:32.908784   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:32.908807   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:32.908812   92925 cri.go:89] found id: ""
	I1213 19:13:32.908819   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:32.908877   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:32.913113   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:32.916615   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:32.916713   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:32.945436   92925 cri.go:89] found id: ""
	I1213 19:13:32.945460   92925 logs.go:282] 0 containers: []
	W1213 19:13:32.945468   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:32.945474   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:32.945532   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:32.972389   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:32.972409   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:32.972414   92925 cri.go:89] found id: ""
	I1213 19:13:32.972421   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:32.972496   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:32.976105   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:32.979491   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:32.979558   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:33.013568   92925 cri.go:89] found id: ""
	I1213 19:13:33.013590   92925 logs.go:282] 0 containers: []
	W1213 19:13:33.013598   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:33.013604   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:33.013662   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:33.041534   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:33.041557   92925 cri.go:89] found id: ""
	I1213 19:13:33.041566   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:33.041622   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:33.045294   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:33.045445   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:33.074126   92925 cri.go:89] found id: ""
	I1213 19:13:33.074196   92925 logs.go:282] 0 containers: []
	W1213 19:13:33.074224   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:33.074248   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:33.074274   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:33.108085   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:33.108112   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:33.196053   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:33.196096   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:33.238729   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:33.238801   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:33.334220   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:33.334258   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:33.347401   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:33.347431   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:33.415328   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:33.415362   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:33.444593   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:33.444672   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:33.519042   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:33.509468    8903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:33.510273    8903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:33.511953    8903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:33.512620    8903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:33.513636    8903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:33.509468    8903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:33.510273    8903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:33.511953    8903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:33.512620    8903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:33.513636    8903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:33.519066   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:33.519078   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:33.546564   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:33.546593   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:33.588382   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:33.588418   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:36.135267   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:36.146588   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:36.146662   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:36.173719   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:36.173741   92925 cri.go:89] found id: ""
	I1213 19:13:36.173750   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:36.173821   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:36.177610   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:36.177680   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:36.204513   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:36.204536   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:36.204540   92925 cri.go:89] found id: ""
	I1213 19:13:36.204548   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:36.204602   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:36.208516   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:36.211831   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:36.211901   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:36.243167   92925 cri.go:89] found id: ""
	I1213 19:13:36.243194   92925 logs.go:282] 0 containers: []
	W1213 19:13:36.243205   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:36.243211   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:36.243271   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:36.272787   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:36.272812   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:36.272817   92925 cri.go:89] found id: ""
	I1213 19:13:36.272825   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:36.272880   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:36.276627   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:36.280060   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:36.280182   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:36.309203   92925 cri.go:89] found id: ""
	I1213 19:13:36.309231   92925 logs.go:282] 0 containers: []
	W1213 19:13:36.309242   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:36.309248   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:36.309310   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:36.342531   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:36.342554   92925 cri.go:89] found id: ""
	I1213 19:13:36.342563   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:36.342631   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:36.346318   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:36.346392   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:36.374406   92925 cri.go:89] found id: ""
	I1213 19:13:36.374442   92925 logs.go:282] 0 containers: []
	W1213 19:13:36.374467   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:36.374485   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:36.374497   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:36.474302   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:36.474340   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:36.557406   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:36.549415    9001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:36.550022    9001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:36.551319    9001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:36.551900    9001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:36.553579    9001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:36.549415    9001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:36.550022    9001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:36.551319    9001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:36.551900    9001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:36.553579    9001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:36.557430   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:36.557443   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:36.583387   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:36.583415   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:36.623378   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:36.623413   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:36.666931   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:36.666964   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:36.696482   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:36.696513   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:36.730677   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:36.730708   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:36.743357   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:36.743386   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:36.813864   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:36.813900   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:36.848686   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:36.848716   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:39.433464   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:39.444066   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:39.444136   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:39.471666   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:39.471686   92925 cri.go:89] found id: ""
	I1213 19:13:39.471693   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:39.471753   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:39.475549   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:39.475641   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:39.505541   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:39.505615   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:39.505645   92925 cri.go:89] found id: ""
	I1213 19:13:39.505667   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:39.505752   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:39.511310   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:39.515781   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:39.515898   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:39.545256   92925 cri.go:89] found id: ""
	I1213 19:13:39.545290   92925 logs.go:282] 0 containers: []
	W1213 19:13:39.545300   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:39.545306   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:39.545379   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:39.576057   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:39.576080   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:39.576085   92925 cri.go:89] found id: ""
	I1213 19:13:39.576092   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:39.576146   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:39.580177   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:39.584087   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:39.584160   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:39.610819   92925 cri.go:89] found id: ""
	I1213 19:13:39.610843   92925 logs.go:282] 0 containers: []
	W1213 19:13:39.610863   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:39.610871   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:39.610929   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:39.638458   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:39.638481   92925 cri.go:89] found id: ""
	I1213 19:13:39.638503   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:39.638564   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:39.642537   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:39.642610   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:39.670872   92925 cri.go:89] found id: ""
	I1213 19:13:39.670951   92925 logs.go:282] 0 containers: []
	W1213 19:13:39.670975   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:39.670998   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:39.671043   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:39.774702   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:39.774743   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:39.846826   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:39.837968    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:39.838545    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:39.840574    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:39.841359    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:39.842988    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:39.837968    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:39.838545    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:39.840574    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:39.841359    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:39.842988    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:39.846849   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:39.846862   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:39.892712   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:39.892743   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:39.960690   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:39.960729   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:40.022528   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:40.022560   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:40.107424   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:40.107461   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:40.149433   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:40.149472   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:40.162446   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:40.162479   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:40.191980   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:40.192009   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:40.239148   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:40.239228   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:42.771936   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:42.782654   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:42.782726   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:42.808850   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:42.808869   92925 cri.go:89] found id: ""
	I1213 19:13:42.808877   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:42.808938   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:42.812682   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:42.812753   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:42.840980   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:42.841072   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:42.841097   92925 cri.go:89] found id: ""
	I1213 19:13:42.841122   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:42.841210   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:42.844946   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:42.848726   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:42.848811   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:42.888597   92925 cri.go:89] found id: ""
	I1213 19:13:42.888663   92925 logs.go:282] 0 containers: []
	W1213 19:13:42.888688   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:42.888707   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:42.888791   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:42.916253   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:42.916323   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:42.916341   92925 cri.go:89] found id: ""
	I1213 19:13:42.916364   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:42.916443   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:42.920031   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:42.923493   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:42.923565   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:42.950967   92925 cri.go:89] found id: ""
	I1213 19:13:42.950991   92925 logs.go:282] 0 containers: []
	W1213 19:13:42.950999   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:42.951005   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:42.951062   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:42.977861   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:42.977884   92925 cri.go:89] found id: ""
	I1213 19:13:42.977892   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:42.977946   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:42.985150   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:42.985252   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:43.014767   92925 cri.go:89] found id: ""
	I1213 19:13:43.014794   92925 logs.go:282] 0 containers: []
	W1213 19:13:43.014803   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:43.014813   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:43.014826   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:43.089031   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:43.089070   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:43.152812   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:43.152840   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:43.253685   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:43.253720   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:43.268102   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:43.268130   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:43.342529   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:43.333442    9294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:43.333905    9294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:43.335923    9294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:43.336467    9294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:43.338397    9294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:43.333442    9294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:43.333905    9294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:43.335923    9294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:43.336467    9294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:43.338397    9294 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:43.342553   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:43.342566   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:43.383957   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:43.383996   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:43.431627   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:43.431662   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:43.504349   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:43.504386   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:43.541135   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:43.541167   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:43.570288   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:43.570315   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:46.101243   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:46.114537   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:46.114605   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:46.142285   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:46.142310   92925 cri.go:89] found id: ""
	I1213 19:13:46.142319   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:46.142374   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:46.146198   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:46.146275   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:46.172413   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:46.172485   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:46.172504   92925 cri.go:89] found id: ""
	I1213 19:13:46.172529   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:46.172649   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:46.176629   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:46.180398   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:46.180514   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:46.208892   92925 cri.go:89] found id: ""
	I1213 19:13:46.208925   92925 logs.go:282] 0 containers: []
	W1213 19:13:46.208934   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:46.208942   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:46.209074   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:46.237365   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:46.237388   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:46.237394   92925 cri.go:89] found id: ""
	I1213 19:13:46.237401   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:46.237458   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:46.241815   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:46.245384   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:46.245482   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:46.272996   92925 cri.go:89] found id: ""
	I1213 19:13:46.273063   92925 logs.go:282] 0 containers: []
	W1213 19:13:46.273072   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:46.273078   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:46.273160   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:46.302629   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:46.302654   92925 cri.go:89] found id: ""
	I1213 19:13:46.302663   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:46.302737   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:46.306762   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:46.306861   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:46.337280   92925 cri.go:89] found id: ""
	I1213 19:13:46.337346   92925 logs.go:282] 0 containers: []
	W1213 19:13:46.337369   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:46.337384   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:46.337395   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:46.349174   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:46.349204   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:46.419942   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:46.411077    9420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:46.411612    9420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:46.413348    9420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:46.413991    9420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:46.415827    9420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:46.411077    9420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:46.411612    9420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:46.413348    9420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:46.413991    9420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:46.415827    9420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:46.419977   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:46.419993   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:46.446859   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:46.446885   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:46.487087   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:46.487124   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:46.547232   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:46.547267   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:46.574826   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:46.574854   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:46.602584   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:46.602609   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:46.640086   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:46.640117   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:46.740777   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:46.740818   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:46.812315   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:46.812357   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:49.395199   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:49.405934   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:49.406009   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:49.433789   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:49.433810   92925 cri.go:89] found id: ""
	I1213 19:13:49.433827   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:49.433883   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:49.437578   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:49.437651   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:49.471711   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:49.471734   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:49.471740   92925 cri.go:89] found id: ""
	I1213 19:13:49.471748   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:49.471801   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:49.475461   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:49.479094   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:49.479168   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:49.505391   92925 cri.go:89] found id: ""
	I1213 19:13:49.505417   92925 logs.go:282] 0 containers: []
	W1213 19:13:49.505426   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:49.505433   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:49.505488   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:49.540863   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:49.540890   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:49.540895   92925 cri.go:89] found id: ""
	I1213 19:13:49.540903   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:49.540960   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:49.544771   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:49.548451   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:49.548524   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:49.575402   92925 cri.go:89] found id: ""
	I1213 19:13:49.575428   92925 logs.go:282] 0 containers: []
	W1213 19:13:49.575436   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:49.575442   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:49.575501   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:49.605123   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:49.605143   92925 cri.go:89] found id: ""
	I1213 19:13:49.605151   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:49.605211   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:49.608919   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:49.609061   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:49.637050   92925 cri.go:89] found id: ""
	I1213 19:13:49.637075   92925 logs.go:282] 0 containers: []
	W1213 19:13:49.637084   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:49.637093   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:49.637105   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:49.744000   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:49.744048   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:49.811345   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:49.802050    9555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:49.802444    9555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:49.805468    9555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:49.805922    9555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:49.807507    9555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:49.802050    9555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:49.802444    9555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:49.805468    9555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:49.805922    9555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:49.807507    9555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:49.811370   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:49.811384   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:49.852043   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:49.852081   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:49.896314   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:49.896349   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:49.924211   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:49.924240   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:50.006219   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:50.006263   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:50.039895   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:50.039978   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:50.054629   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:50.054656   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:50.084937   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:50.084966   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:50.159510   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:50.159553   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:52.688326   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:52.699486   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:52.699554   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:52.726195   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:52.726216   92925 cri.go:89] found id: ""
	I1213 19:13:52.726224   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:52.726280   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:52.730715   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:52.730785   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:52.756911   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:52.756933   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:52.756938   92925 cri.go:89] found id: ""
	I1213 19:13:52.756946   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:52.757069   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:52.760788   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:52.764452   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:52.764551   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:52.790658   92925 cri.go:89] found id: ""
	I1213 19:13:52.790732   92925 logs.go:282] 0 containers: []
	W1213 19:13:52.790749   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:52.790756   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:52.790816   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:52.818365   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:52.818388   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:52.818394   92925 cri.go:89] found id: ""
	I1213 19:13:52.818402   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:52.818477   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:52.822460   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:52.826054   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:52.826130   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:52.853218   92925 cri.go:89] found id: ""
	I1213 19:13:52.853245   92925 logs.go:282] 0 containers: []
	W1213 19:13:52.853256   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:52.853262   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:52.853321   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:52.879712   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:52.879736   92925 cri.go:89] found id: ""
	I1213 19:13:52.879744   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:52.879798   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:52.883563   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:52.883639   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:52.910499   92925 cri.go:89] found id: ""
	I1213 19:13:52.910526   92925 logs.go:282] 0 containers: []
	W1213 19:13:52.910535   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:52.910545   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:52.910577   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:52.990183   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:52.990219   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:53.026776   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:53.026805   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:53.118043   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:53.107629    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:53.110332    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:53.111160    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:53.112144    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:53.113182    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:53.107629    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:53.110332    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:53.111160    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:53.112144    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:53.113182    9708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:53.118090   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:53.118141   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:53.160995   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:53.161190   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:53.204763   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:53.204795   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:53.270772   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:53.270810   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:53.370857   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:53.370895   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:53.383046   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:53.383074   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:53.410648   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:53.410684   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:53.439739   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:53.439768   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:55.970243   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:55.981613   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:55.981689   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:56.018614   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:56.018637   92925 cri.go:89] found id: ""
	I1213 19:13:56.018647   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:56.018707   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:56.022914   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:56.022990   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:56.056158   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:56.056182   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:56.056187   92925 cri.go:89] found id: ""
	I1213 19:13:56.056194   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:56.056275   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:56.061504   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:56.065201   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:56.065281   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:56.094861   92925 cri.go:89] found id: ""
	I1213 19:13:56.094887   92925 logs.go:282] 0 containers: []
	W1213 19:13:56.094896   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:56.094903   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:56.094982   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:56.133165   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:56.133240   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:56.133260   92925 cri.go:89] found id: ""
	I1213 19:13:56.133291   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:56.133356   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:56.137225   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:56.140713   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:56.140785   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:56.168013   92925 cri.go:89] found id: ""
	I1213 19:13:56.168039   92925 logs.go:282] 0 containers: []
	W1213 19:13:56.168048   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:56.168055   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:56.168118   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:56.196793   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:56.196867   92925 cri.go:89] found id: ""
	I1213 19:13:56.196876   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:56.196935   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:56.200591   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:56.200672   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:56.227851   92925 cri.go:89] found id: ""
	I1213 19:13:56.227877   92925 logs.go:282] 0 containers: []
	W1213 19:13:56.227887   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:56.227896   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:56.227908   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:13:56.323380   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:56.323416   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:56.337259   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:56.337289   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:56.362908   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:56.362939   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:56.443333   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:56.443372   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:56.522467   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:56.511318    9846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:56.512215    9846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:56.514040    9846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:56.515835    9846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:56.516378    9846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:56.511318    9846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:56.512215    9846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:56.514040    9846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:56.515835    9846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:56.516378    9846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:56.522485   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:56.522498   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:56.561809   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:56.561843   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:56.606943   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:56.606979   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:56.678268   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:56.678310   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:56.707280   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:56.707309   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:56.736890   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:56.736917   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:59.286954   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:13:59.298376   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:13:59.298447   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:13:59.325376   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:59.325399   92925 cri.go:89] found id: ""
	I1213 19:13:59.325407   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:13:59.325464   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:59.329049   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:13:59.329123   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:13:59.356066   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:59.356085   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:59.356089   92925 cri.go:89] found id: ""
	I1213 19:13:59.356097   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:13:59.356150   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:59.360113   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:59.363660   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:13:59.363736   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:13:59.389568   92925 cri.go:89] found id: ""
	I1213 19:13:59.389594   92925 logs.go:282] 0 containers: []
	W1213 19:13:59.389604   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:13:59.389611   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:13:59.389692   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:13:59.423243   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:59.423266   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:59.423270   92925 cri.go:89] found id: ""
	I1213 19:13:59.423278   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:13:59.423350   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:59.426944   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:59.431770   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:13:59.431844   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:13:59.458103   92925 cri.go:89] found id: ""
	I1213 19:13:59.458173   92925 logs.go:282] 0 containers: []
	W1213 19:13:59.458220   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:13:59.458246   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:13:59.458332   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:13:59.487250   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:59.487324   92925 cri.go:89] found id: ""
	I1213 19:13:59.487340   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:13:59.487406   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:13:59.491784   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:13:59.491852   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:13:59.525717   92925 cri.go:89] found id: ""
	I1213 19:13:59.525739   92925 logs.go:282] 0 containers: []
	W1213 19:13:59.525748   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:13:59.525756   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:13:59.525768   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:13:59.554063   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:13:59.554091   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:13:59.599874   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:13:59.599909   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:13:59.626733   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:13:59.626765   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:13:59.700778   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:13:59.700814   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:13:59.713358   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:13:59.713388   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:13:59.783137   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:13:59.774677   10001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:59.775356   10001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:59.776867   10001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:59.777580   10001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:59.778486   10001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:13:59.774677   10001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:59.775356   10001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:59.776867   10001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:59.777580   10001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:13:59.778486   10001 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:13:59.783158   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:13:59.783169   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:13:59.832218   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:13:59.832248   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:13:59.901253   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:13:59.901329   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:13:59.930678   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:13:59.930701   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:13:59.962070   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:13:59.962099   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:02.744450   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:02.755514   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:02.755587   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:02.782984   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:02.783079   92925 cri.go:89] found id: ""
	I1213 19:14:02.783095   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:02.783157   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:02.787187   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:02.787262   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:02.814931   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:02.814954   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:02.814959   92925 cri.go:89] found id: ""
	I1213 19:14:02.814967   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:02.815031   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:02.818983   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:02.822788   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:02.822865   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:02.848942   92925 cri.go:89] found id: ""
	I1213 19:14:02.848966   92925 logs.go:282] 0 containers: []
	W1213 19:14:02.848975   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:02.848991   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:02.849096   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:02.876134   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:02.876155   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:02.876160   92925 cri.go:89] found id: ""
	I1213 19:14:02.876168   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:02.876249   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:02.880576   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:02.885335   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:02.885459   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:02.913660   92925 cri.go:89] found id: ""
	I1213 19:14:02.913733   92925 logs.go:282] 0 containers: []
	W1213 19:14:02.913763   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:02.913802   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:02.913924   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:02.940178   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:02.940248   92925 cri.go:89] found id: ""
	I1213 19:14:02.940270   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:02.940359   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:02.944376   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:02.944500   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:02.975815   92925 cri.go:89] found id: ""
	I1213 19:14:02.975838   92925 logs.go:282] 0 containers: []
	W1213 19:14:02.975846   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:02.975855   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:02.975867   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:03.074688   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:03.074723   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:03.156277   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:03.147816   10113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:03.148501   10113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:03.150174   10113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:03.150777   10113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:03.152270   10113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:03.147816   10113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:03.148501   10113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:03.150174   10113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:03.150777   10113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:03.152270   10113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:03.156299   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:03.156311   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:03.182450   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:03.182477   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:03.221147   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:03.221181   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:03.292920   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:03.292962   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:03.323958   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:03.323983   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:03.397255   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:03.397289   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:03.410296   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:03.410325   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:03.465930   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:03.465966   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:03.497989   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:03.498017   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:06.058798   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:06.069576   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:06.069643   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:06.097652   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:06.097675   92925 cri.go:89] found id: ""
	I1213 19:14:06.097684   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:06.097767   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:06.103860   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:06.103983   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:06.133321   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:06.133354   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:06.133359   92925 cri.go:89] found id: ""
	I1213 19:14:06.133367   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:06.133434   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:06.137349   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:06.140932   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:06.141036   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:06.174768   92925 cri.go:89] found id: ""
	I1213 19:14:06.174796   92925 logs.go:282] 0 containers: []
	W1213 19:14:06.174806   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:06.174813   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:06.174923   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:06.202214   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:06.202245   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:06.202249   92925 cri.go:89] found id: ""
	I1213 19:14:06.202257   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:06.202315   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:06.206201   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:06.209869   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:06.209950   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:06.240738   92925 cri.go:89] found id: ""
	I1213 19:14:06.240762   92925 logs.go:282] 0 containers: []
	W1213 19:14:06.240771   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:06.240777   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:06.240838   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:06.267045   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:06.267067   92925 cri.go:89] found id: ""
	I1213 19:14:06.267076   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:06.267134   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:06.270950   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:06.271059   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:06.298538   92925 cri.go:89] found id: ""
	I1213 19:14:06.298566   92925 logs.go:282] 0 containers: []
	W1213 19:14:06.298576   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:06.298585   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:06.298600   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:06.401303   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:06.401348   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:06.414599   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:06.414631   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:06.441984   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:06.442056   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:06.481290   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:06.481321   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:06.541131   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:06.541162   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:06.614944   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:06.614978   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:06.700895   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:06.700937   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:06.734007   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:06.734036   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:06.804578   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:06.795862   10300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:06.796443   10300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:06.798255   10300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:06.798765   10300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:06.800521   10300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:06.795862   10300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:06.796443   10300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:06.798255   10300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:06.798765   10300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:06.800521   10300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:06.804604   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:06.804616   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:06.832247   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:06.832275   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:09.358770   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:09.369376   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:09.369446   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:09.397174   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:09.397250   92925 cri.go:89] found id: ""
	I1213 19:14:09.397268   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:09.397341   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:09.401282   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:09.401379   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:09.430806   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:09.430829   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:09.430834   92925 cri.go:89] found id: ""
	I1213 19:14:09.430842   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:09.430895   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:09.434593   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:09.437861   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:09.437931   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:09.462972   92925 cri.go:89] found id: ""
	I1213 19:14:09.463040   92925 logs.go:282] 0 containers: []
	W1213 19:14:09.463067   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:09.463087   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:09.463154   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:09.489906   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:09.489930   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:09.489935   92925 cri.go:89] found id: ""
	I1213 19:14:09.489943   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:09.490000   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:09.493996   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:09.497780   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:09.497895   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:09.529207   92925 cri.go:89] found id: ""
	I1213 19:14:09.529232   92925 logs.go:282] 0 containers: []
	W1213 19:14:09.529241   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:09.529280   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:09.529364   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:09.556267   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:09.556289   92925 cri.go:89] found id: ""
	I1213 19:14:09.556297   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:09.556383   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:09.560687   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:09.560770   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:09.592345   92925 cri.go:89] found id: ""
	I1213 19:14:09.592380   92925 logs.go:282] 0 containers: []
	W1213 19:14:09.592389   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:09.592398   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:09.592410   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:09.604889   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:09.604917   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:09.631468   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:09.631498   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:09.670679   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:09.670712   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:09.715815   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:09.715851   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:09.743494   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:09.743523   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:09.775725   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:09.775753   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:09.873965   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:09.874039   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:09.959605   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:09.948036   10435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:09.948708   10435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:09.950229   10435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:09.950803   10435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:09.952453   10435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:09.948036   10435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:09.948708   10435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:09.950229   10435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:09.950803   10435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:09.952453   10435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:09.959680   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:09.959707   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:10.051190   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:10.051228   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:10.086712   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:10.086738   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:12.672644   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:12.683960   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:12.684058   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:12.712689   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:12.712710   92925 cri.go:89] found id: ""
	I1213 19:14:12.712718   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:12.712772   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:12.716732   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:12.716806   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:12.744449   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:12.744468   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:12.744473   92925 cri.go:89] found id: ""
	I1213 19:14:12.744480   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:12.744548   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:12.748558   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:12.752120   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:12.752195   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:12.779575   92925 cri.go:89] found id: ""
	I1213 19:14:12.779602   92925 logs.go:282] 0 containers: []
	W1213 19:14:12.779611   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:12.779617   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:12.779677   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:12.808259   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:12.808279   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:12.808284   92925 cri.go:89] found id: ""
	I1213 19:14:12.808292   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:12.808348   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:12.812274   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:12.816250   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:12.816380   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:12.842528   92925 cri.go:89] found id: ""
	I1213 19:14:12.842556   92925 logs.go:282] 0 containers: []
	W1213 19:14:12.842566   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:12.842572   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:12.842655   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:12.870846   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:12.870916   92925 cri.go:89] found id: ""
	I1213 19:14:12.870939   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:12.871003   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:12.874709   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:12.874809   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:12.901168   92925 cri.go:89] found id: ""
	I1213 19:14:12.901194   92925 logs.go:282] 0 containers: []
	W1213 19:14:12.901203   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:12.901212   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:12.901224   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:12.993856   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:12.993888   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:13.006289   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:13.006320   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:13.038515   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:13.038544   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:13.101746   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:13.101795   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:13.153697   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:13.153736   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:13.183337   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:13.183366   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:13.262960   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:13.262995   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:13.297818   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:13.297845   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:13.368622   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:13.360485   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:13.361349   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:13.363057   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:13.363352   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:13.364843   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:13.360485   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:13.361349   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:13.363057   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:13.363352   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:13.364843   10576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:13.368650   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:13.368664   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:13.439804   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:13.439843   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:15.976229   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:15.989077   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:15.989247   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:16.020054   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:16.020079   92925 cri.go:89] found id: ""
	I1213 19:14:16.020087   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:16.020158   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:16.024026   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:16.024118   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:16.051647   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:16.051670   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:16.051681   92925 cri.go:89] found id: ""
	I1213 19:14:16.051688   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:16.051772   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:16.055489   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:16.059115   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:16.059234   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:16.086414   92925 cri.go:89] found id: ""
	I1213 19:14:16.086438   92925 logs.go:282] 0 containers: []
	W1213 19:14:16.086447   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:16.086453   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:16.086513   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:16.118349   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:16.118415   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:16.118434   92925 cri.go:89] found id: ""
	I1213 19:14:16.118458   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:16.118545   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:16.122398   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:16.129488   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:16.129561   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:16.156699   92925 cri.go:89] found id: ""
	I1213 19:14:16.156725   92925 logs.go:282] 0 containers: []
	W1213 19:14:16.156734   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:16.156740   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:16.156799   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:16.183419   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:16.183444   92925 cri.go:89] found id: ""
	I1213 19:14:16.183465   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:16.183520   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:16.187500   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:16.187599   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:16.213532   92925 cri.go:89] found id: ""
	I1213 19:14:16.213610   92925 logs.go:282] 0 containers: []
	W1213 19:14:16.213634   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:16.213657   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:16.213703   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:16.225956   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:16.225985   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:16.299377   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:16.290117   10665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:16.291089   10665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:16.292835   10665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:16.293694   10665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:16.295412   10665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:16.290117   10665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:16.291089   10665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:16.292835   10665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:16.293694   10665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:16.295412   10665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:16.299401   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:16.299416   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:16.327259   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:16.327288   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:16.353346   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:16.353376   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:16.380053   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:16.380079   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:16.415886   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:16.415918   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:16.512571   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:16.512605   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:16.557415   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:16.557451   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:16.616391   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:16.616424   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:16.692096   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:16.692131   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:19.277525   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:19.287988   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:19.288109   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:19.314035   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:19.314055   92925 cri.go:89] found id: ""
	I1213 19:14:19.314064   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:19.314137   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:19.317785   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:19.317856   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:19.344128   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:19.344151   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:19.344155   92925 cri.go:89] found id: ""
	I1213 19:14:19.344163   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:19.344216   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:19.348619   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:19.351872   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:19.351961   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:19.377237   92925 cri.go:89] found id: ""
	I1213 19:14:19.377263   92925 logs.go:282] 0 containers: []
	W1213 19:14:19.377272   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:19.377278   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:19.377360   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:19.404210   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:19.404233   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:19.404238   92925 cri.go:89] found id: ""
	I1213 19:14:19.404245   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:19.404318   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:19.407909   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:19.411268   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:19.411336   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:19.437051   92925 cri.go:89] found id: ""
	I1213 19:14:19.437075   92925 logs.go:282] 0 containers: []
	W1213 19:14:19.437083   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:19.437089   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:19.437147   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:19.461816   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:19.461847   92925 cri.go:89] found id: ""
	I1213 19:14:19.461856   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:19.461911   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:19.465492   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:19.465587   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:19.491501   92925 cri.go:89] found id: ""
	I1213 19:14:19.491527   92925 logs.go:282] 0 containers: []
	W1213 19:14:19.491536   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:19.491545   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:19.491588   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:19.530624   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:19.530652   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:19.570388   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:19.570423   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:19.649601   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:19.649638   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:19.682548   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:19.682579   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:19.765347   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:19.765383   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:19.797401   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:19.797430   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:19.892983   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:19.893036   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:19.905252   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:19.905281   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:19.976038   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:19.968048   10849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:19.968518   10849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:19.969788   10849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:19.970473   10849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:19.972132   10849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:19.968048   10849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:19.968518   10849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:19.969788   10849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:19.970473   10849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:19.972132   10849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:19.976061   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:19.976074   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:20.015893   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:20.015932   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:22.580793   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:22.591726   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:22.591801   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:22.617941   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:22.617972   92925 cri.go:89] found id: ""
	I1213 19:14:22.617981   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:22.618039   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:22.621895   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:22.621967   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:22.648715   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:22.648778   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:22.648797   92925 cri.go:89] found id: ""
	I1213 19:14:22.648821   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:22.648904   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:22.653305   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:22.657032   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:22.657104   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:22.686906   92925 cri.go:89] found id: ""
	I1213 19:14:22.686932   92925 logs.go:282] 0 containers: []
	W1213 19:14:22.686946   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:22.686952   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:22.687013   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:22.714929   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:22.714951   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:22.714956   92925 cri.go:89] found id: ""
	I1213 19:14:22.714964   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:22.715025   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:22.719071   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:22.722714   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:22.722784   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:22.750440   92925 cri.go:89] found id: ""
	I1213 19:14:22.750470   92925 logs.go:282] 0 containers: []
	W1213 19:14:22.750480   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:22.750486   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:22.750549   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:22.777550   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:22.777572   92925 cri.go:89] found id: ""
	I1213 19:14:22.777580   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:22.777635   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:22.781380   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:22.781475   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:22.816511   92925 cri.go:89] found id: ""
	I1213 19:14:22.816537   92925 logs.go:282] 0 containers: []
	W1213 19:14:22.816547   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:22.816572   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:22.816617   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:22.842295   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:22.842322   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:22.882060   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:22.882095   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:22.965336   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:22.965374   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:22.995696   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:22.995731   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:23.098694   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:23.098782   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:23.117712   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:23.117743   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:23.167456   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:23.167497   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:23.195171   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:23.195199   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:23.279228   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:23.279264   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:23.318709   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:23.318738   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:23.384532   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:23.376056   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:23.376628   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:23.378283   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:23.379367   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:23.379806   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:23.376056   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:23.376628   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:23.378283   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:23.379367   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:23.379806   11004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:25.885566   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:25.896623   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:25.896696   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:25.924503   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:25.924535   92925 cri.go:89] found id: ""
	I1213 19:14:25.924544   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:25.924601   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:25.928341   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:25.928413   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:25.966385   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:25.966404   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:25.966409   92925 cri.go:89] found id: ""
	I1213 19:14:25.966417   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:25.966471   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:25.970190   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:25.974101   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:25.974229   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:26.004380   92925 cri.go:89] found id: ""
	I1213 19:14:26.004456   92925 logs.go:282] 0 containers: []
	W1213 19:14:26.004479   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:26.004498   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:26.004595   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:26.031828   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:26.031853   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:26.031860   92925 cri.go:89] found id: ""
	I1213 19:14:26.031868   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:26.031925   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:26.036387   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:26.040161   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:26.040235   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:26.070525   92925 cri.go:89] found id: ""
	I1213 19:14:26.070591   92925 logs.go:282] 0 containers: []
	W1213 19:14:26.070616   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:26.070635   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:26.070724   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:26.108253   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:26.108277   92925 cri.go:89] found id: ""
	I1213 19:14:26.108294   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:26.108373   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:26.112191   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:26.112324   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:26.146018   92925 cri.go:89] found id: ""
	I1213 19:14:26.146042   92925 logs.go:282] 0 containers: []
	W1213 19:14:26.146052   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:26.146060   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:26.146094   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:26.187197   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:26.187229   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:26.232694   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:26.232724   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:26.310398   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:26.310435   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:26.323748   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:26.323775   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:26.350662   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:26.350689   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:26.380636   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:26.380707   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:26.407064   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:26.407089   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:26.483950   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:26.483984   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:26.536817   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:26.536846   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:26.654750   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:26.654801   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:26.733679   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:26.725319   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:26.726046   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:26.727714   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:26.728228   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:26.729870   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:26.725319   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:26.726046   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:26.727714   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:26.728228   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:26.729870   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:29.233968   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:29.244666   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:29.244746   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:29.272994   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:29.273043   92925 cri.go:89] found id: ""
	I1213 19:14:29.273051   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:29.273108   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:29.277950   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:29.278022   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:29.304315   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:29.304334   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:29.304338   92925 cri.go:89] found id: ""
	I1213 19:14:29.304346   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:29.304402   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:29.308379   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:29.311905   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:29.311974   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:29.337925   92925 cri.go:89] found id: ""
	I1213 19:14:29.337953   92925 logs.go:282] 0 containers: []
	W1213 19:14:29.337962   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:29.337968   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:29.338028   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:29.365135   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:29.365156   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:29.365160   92925 cri.go:89] found id: ""
	I1213 19:14:29.365167   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:29.365222   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:29.368867   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:29.372263   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:29.372334   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:29.403367   92925 cri.go:89] found id: ""
	I1213 19:14:29.403393   92925 logs.go:282] 0 containers: []
	W1213 19:14:29.403402   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:29.403408   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:29.403466   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:29.429639   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:29.429703   92925 cri.go:89] found id: ""
	I1213 19:14:29.429718   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:29.429782   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:29.433301   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:29.433373   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:29.460244   92925 cri.go:89] found id: ""
	I1213 19:14:29.460272   92925 logs.go:282] 0 containers: []
	W1213 19:14:29.460282   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:29.460291   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:29.460302   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:29.555127   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:29.555166   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:29.583790   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:29.583827   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:29.646377   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:29.646409   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:29.720554   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:29.720592   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:29.751659   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:29.751686   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:29.788857   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:29.788883   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:29.800809   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:29.800844   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:29.869250   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:29.862112   11259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:29.862682   11259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:29.864146   11259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:29.864555   11259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:29.865755   11259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:29.862112   11259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:29.862682   11259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:29.864146   11259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:29.864555   11259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:29.865755   11259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:29.869274   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:29.869287   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:29.913688   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:29.913724   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:29.956382   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:29.956408   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:32.553678   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:32.565396   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:32.565470   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:32.592588   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:32.592613   92925 cri.go:89] found id: ""
	I1213 19:14:32.592622   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:32.592684   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:32.596429   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:32.596509   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:32.624469   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:32.624493   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:32.624499   92925 cri.go:89] found id: ""
	I1213 19:14:32.624506   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:32.624559   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:32.628270   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:32.631873   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:32.632003   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:32.657120   92925 cri.go:89] found id: ""
	I1213 19:14:32.657144   92925 logs.go:282] 0 containers: []
	W1213 19:14:32.657153   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:32.657159   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:32.657220   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:32.684878   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:32.684901   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:32.684906   92925 cri.go:89] found id: ""
	I1213 19:14:32.684914   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:32.684976   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:32.689235   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:32.692754   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:32.692825   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:32.722855   92925 cri.go:89] found id: ""
	I1213 19:14:32.722878   92925 logs.go:282] 0 containers: []
	W1213 19:14:32.722887   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:32.722893   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:32.722952   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:32.753685   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:32.753704   92925 cri.go:89] found id: ""
	I1213 19:14:32.753712   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:32.753764   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:32.758129   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:32.758214   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:32.784526   92925 cri.go:89] found id: ""
	I1213 19:14:32.784599   92925 logs.go:282] 0 containers: []
	W1213 19:14:32.784623   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:32.784645   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:32.784683   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:32.826015   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:32.826050   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:32.915444   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:32.915483   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:32.943132   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:32.943167   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:33.017904   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:33.017945   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:33.050228   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:33.050258   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:33.122559   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:33.114436   11385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:33.115150   11385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:33.116863   11385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:33.117500   11385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:33.118980   11385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:33.114436   11385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:33.115150   11385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:33.116863   11385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:33.117500   11385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:33.118980   11385 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:33.122583   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:33.122597   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:33.177421   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:33.177455   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:33.206989   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:33.207016   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:33.305130   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:33.305169   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:33.319318   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:33.319416   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:35.847899   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:35.859028   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:35.859101   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:35.887722   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:35.887745   92925 cri.go:89] found id: ""
	I1213 19:14:35.887754   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:35.887807   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:35.891699   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:35.891771   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:35.920114   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:35.920138   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:35.920144   92925 cri.go:89] found id: ""
	I1213 19:14:35.920152   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:35.920222   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:35.923937   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:35.927605   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:35.927678   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:35.953980   92925 cri.go:89] found id: ""
	I1213 19:14:35.954007   92925 logs.go:282] 0 containers: []
	W1213 19:14:35.954016   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:35.954023   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:35.954080   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:35.980645   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:35.980665   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:35.980670   92925 cri.go:89] found id: ""
	I1213 19:14:35.980678   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:35.980742   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:35.991946   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:35.996641   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:35.996726   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:36.026202   92925 cri.go:89] found id: ""
	I1213 19:14:36.026228   92925 logs.go:282] 0 containers: []
	W1213 19:14:36.026238   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:36.026245   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:36.026350   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:36.051979   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:36.052001   92925 cri.go:89] found id: ""
	I1213 19:14:36.052010   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:36.052066   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:36.055868   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:36.055938   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:36.083649   92925 cri.go:89] found id: ""
	I1213 19:14:36.083675   92925 logs.go:282] 0 containers: []
	W1213 19:14:36.083685   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:36.083693   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:36.083704   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:36.164414   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:36.164464   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:36.198766   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:36.198793   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:36.298985   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:36.299028   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:36.346466   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:36.346498   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:36.376231   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:36.376258   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:36.403571   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:36.403597   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:36.417684   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:36.417714   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:36.487562   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:36.479494   11529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:36.480246   11529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:36.481848   11529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:36.482211   11529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:36.483808   11529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:36.479494   11529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:36.480246   11529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:36.481848   11529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:36.482211   11529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:36.483808   11529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:36.487585   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:36.487597   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:36.514488   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:36.514514   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:36.559954   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:36.559990   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:39.133526   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:39.150754   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:39.150826   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:39.179295   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:39.179315   92925 cri.go:89] found id: ""
	I1213 19:14:39.179324   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:39.179380   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:39.185538   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:39.185605   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:39.216427   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:39.216449   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:39.216454   92925 cri.go:89] found id: ""
	I1213 19:14:39.216462   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:39.216517   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:39.221041   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:39.225622   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:39.225691   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:39.251922   92925 cri.go:89] found id: ""
	I1213 19:14:39.251946   92925 logs.go:282] 0 containers: []
	W1213 19:14:39.251955   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:39.251961   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:39.252019   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:39.281875   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:39.281900   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:39.281905   92925 cri.go:89] found id: ""
	I1213 19:14:39.281912   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:39.281970   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:39.286420   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:39.290568   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:39.290663   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:39.315894   92925 cri.go:89] found id: ""
	I1213 19:14:39.315996   92925 logs.go:282] 0 containers: []
	W1213 19:14:39.316021   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:39.316041   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:39.316153   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:39.344960   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:39.344983   92925 cri.go:89] found id: ""
	I1213 19:14:39.344992   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:39.345091   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:39.348776   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:39.348847   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:39.378840   92925 cri.go:89] found id: ""
	I1213 19:14:39.378862   92925 logs.go:282] 0 containers: []
	W1213 19:14:39.378870   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:39.378879   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:39.378890   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:39.410058   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:39.410087   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:39.510110   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:39.510188   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:39.542821   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:39.542892   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:39.614365   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:39.605214   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:39.606127   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:39.607756   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:39.608303   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:39.610109   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:39.605214   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:39.606127   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:39.607756   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:39.608303   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:39.610109   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:39.614387   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:39.614403   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:39.656166   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:39.656199   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:39.700850   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:39.700887   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:39.735225   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:39.735267   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:39.765360   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:39.765396   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:39.856068   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:39.856115   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:39.883708   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:39.883738   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:42.458661   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:42.469945   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:42.470018   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:42.497805   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:42.497831   92925 cri.go:89] found id: ""
	I1213 19:14:42.497840   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:42.497898   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:42.502059   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:42.502128   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:42.534485   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:42.534509   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:42.534514   92925 cri.go:89] found id: ""
	I1213 19:14:42.534521   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:42.534578   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:42.539929   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:42.544534   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:42.544618   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:42.572959   92925 cri.go:89] found id: ""
	I1213 19:14:42.572983   92925 logs.go:282] 0 containers: []
	W1213 19:14:42.572991   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:42.572998   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:42.573085   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:42.605231   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:42.605253   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:42.605257   92925 cri.go:89] found id: ""
	I1213 19:14:42.605265   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:42.605324   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:42.609379   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:42.613098   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:42.613183   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:42.641856   92925 cri.go:89] found id: ""
	I1213 19:14:42.641881   92925 logs.go:282] 0 containers: []
	W1213 19:14:42.641890   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:42.641897   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:42.641956   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:42.670835   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:42.670862   92925 cri.go:89] found id: ""
	I1213 19:14:42.670870   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:42.670923   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:42.674669   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:42.674780   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:42.701820   92925 cri.go:89] found id: ""
	I1213 19:14:42.701886   92925 logs.go:282] 0 containers: []
	W1213 19:14:42.701912   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:42.701935   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:42.701974   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:42.795111   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:42.795148   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:42.843272   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:42.843308   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:42.918660   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:42.918701   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:42.953437   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:42.953470   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:42.980705   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:42.980735   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:43.075228   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:43.075266   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:43.089833   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:43.089865   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:43.165554   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:43.156189   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:43.157143   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:43.158950   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:43.160521   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:43.161743   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:43.156189   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:43.157143   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:43.158950   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:43.160521   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:43.161743   11802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:43.165619   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:43.165648   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:43.195772   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:43.195850   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:43.266745   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:43.266781   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:45.800090   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:45.811228   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:45.811319   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:45.844476   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:45.844562   92925 cri.go:89] found id: ""
	I1213 19:14:45.844585   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:45.844658   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:45.848635   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:45.848730   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:45.878507   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:45.878532   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:45.878537   92925 cri.go:89] found id: ""
	I1213 19:14:45.878545   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:45.878626   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:45.883362   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:45.887015   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:45.887090   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:45.922472   92925 cri.go:89] found id: ""
	I1213 19:14:45.922495   92925 logs.go:282] 0 containers: []
	W1213 19:14:45.922504   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:45.922510   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:45.922571   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:45.961736   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:45.961766   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:45.961772   92925 cri.go:89] found id: ""
	I1213 19:14:45.961779   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:45.961846   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:45.965883   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:45.969985   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:45.970062   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:46.005121   92925 cri.go:89] found id: ""
	I1213 19:14:46.005143   92925 logs.go:282] 0 containers: []
	W1213 19:14:46.005153   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:46.005159   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:46.005218   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:46.033851   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:46.033871   92925 cri.go:89] found id: ""
	I1213 19:14:46.033878   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:46.033932   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:46.037737   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:46.037813   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:46.064426   92925 cri.go:89] found id: ""
	I1213 19:14:46.064493   92925 logs.go:282] 0 containers: []
	W1213 19:14:46.064517   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:46.064541   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:46.064580   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:46.162246   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:46.162285   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:46.175470   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:46.175500   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:46.249273   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:46.239319   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:46.240280   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:46.242150   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:46.242816   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:46.244382   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:46.239319   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:46.240280   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:46.242150   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:46.242816   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:46.244382   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:46.249333   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:46.249347   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:46.277985   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:46.278016   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:46.332032   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:46.332065   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:46.376410   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:46.376446   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:46.455695   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:46.455772   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:46.485453   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:46.485479   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:46.522886   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:46.522916   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:46.601217   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:46.601253   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:49.142956   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:49.157230   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:49.157309   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:49.185733   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:49.185767   92925 cri.go:89] found id: ""
	I1213 19:14:49.185775   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:49.185830   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:49.190180   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:49.190249   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:49.218248   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:49.218271   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:49.218276   92925 cri.go:89] found id: ""
	I1213 19:14:49.218285   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:49.218343   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:49.222331   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:49.226027   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:49.226107   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:49.258473   92925 cri.go:89] found id: ""
	I1213 19:14:49.258496   92925 logs.go:282] 0 containers: []
	W1213 19:14:49.258504   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:49.258512   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:49.258570   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:49.285496   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:49.285560   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:49.285578   92925 cri.go:89] found id: ""
	I1213 19:14:49.285601   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:49.285684   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:49.291508   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:49.296197   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:49.296358   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:49.325094   92925 cri.go:89] found id: ""
	I1213 19:14:49.325119   92925 logs.go:282] 0 containers: []
	W1213 19:14:49.325127   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:49.325134   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:49.325193   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:49.350750   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:49.350777   92925 cri.go:89] found id: ""
	I1213 19:14:49.350794   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:49.350857   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:49.354789   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:49.354915   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:49.381275   92925 cri.go:89] found id: ""
	I1213 19:14:49.381302   92925 logs.go:282] 0 containers: []
	W1213 19:14:49.381311   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:49.381320   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:49.381331   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:49.473722   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:49.473760   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:49.486016   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:49.486083   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:49.523030   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:49.523060   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:49.602664   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:49.602699   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:49.685307   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:49.685343   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:49.720678   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:49.720706   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:49.787762   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:49.779084   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:49.779733   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:49.781504   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:49.782055   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:49.783675   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:49.779084   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:49.779733   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:49.781504   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:49.782055   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:49.783675   12069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:49.787782   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:49.787795   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:49.826153   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:49.826188   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:49.871719   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:49.871752   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:49.902768   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:49.902858   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:52.432900   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:52.443527   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:52.443639   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:52.470204   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:52.470237   92925 cri.go:89] found id: ""
	I1213 19:14:52.470247   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:52.470302   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:52.473971   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:52.474058   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:52.501963   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:52.501983   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:52.501987   92925 cri.go:89] found id: ""
	I1213 19:14:52.501994   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:52.502048   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:52.505744   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:52.509295   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:52.509368   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:52.534850   92925 cri.go:89] found id: ""
	I1213 19:14:52.534917   92925 logs.go:282] 0 containers: []
	W1213 19:14:52.534943   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:52.534959   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:52.535033   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:52.570973   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:52.571045   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:52.571066   92925 cri.go:89] found id: ""
	I1213 19:14:52.571086   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:52.571156   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:52.574824   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:52.578317   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:52.578384   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:52.606849   92925 cri.go:89] found id: ""
	I1213 19:14:52.606873   92925 logs.go:282] 0 containers: []
	W1213 19:14:52.606882   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:52.606888   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:52.606945   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:52.633073   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:52.633095   92925 cri.go:89] found id: ""
	I1213 19:14:52.633103   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:52.633169   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:52.636819   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:52.636895   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:52.663310   92925 cri.go:89] found id: ""
	I1213 19:14:52.663333   92925 logs.go:282] 0 containers: []
	W1213 19:14:52.663342   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:52.663350   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:52.663363   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:52.732904   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:52.724948   12171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:52.725610   12171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:52.727167   12171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:52.727671   12171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:52.729366   12171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:52.724948   12171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:52.725610   12171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:52.727167   12171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:52.727671   12171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:52.729366   12171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:52.732929   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:52.732943   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:52.771098   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:52.771129   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:52.846025   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:52.846063   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:52.888075   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:52.888104   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:52.992414   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:52.992452   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:53.007058   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:53.007089   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:53.034812   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:53.034841   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:53.078790   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:53.078828   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:53.134673   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:53.134708   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:53.162943   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:53.162969   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:55.740743   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:55.751731   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:55.751816   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:55.779888   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:55.779908   92925 cri.go:89] found id: ""
	I1213 19:14:55.779916   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:55.779976   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:55.783761   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:55.783831   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:55.810156   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:55.810175   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:55.810185   92925 cri.go:89] found id: ""
	I1213 19:14:55.810192   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:55.810252   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:55.814013   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:55.817577   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:55.817649   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:55.843468   92925 cri.go:89] found id: ""
	I1213 19:14:55.843491   92925 logs.go:282] 0 containers: []
	W1213 19:14:55.843499   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:55.843505   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:55.843561   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:55.870048   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:55.870081   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:55.870093   92925 cri.go:89] found id: ""
	I1213 19:14:55.870100   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:55.870158   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:55.874026   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:55.877764   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:55.877852   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:55.907873   92925 cri.go:89] found id: ""
	I1213 19:14:55.907900   92925 logs.go:282] 0 containers: []
	W1213 19:14:55.907909   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:55.907915   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:55.907976   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:55.934710   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:55.934732   92925 cri.go:89] found id: ""
	I1213 19:14:55.934740   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:55.934795   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:55.938598   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:55.938671   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:55.968271   92925 cri.go:89] found id: ""
	I1213 19:14:55.968337   92925 logs.go:282] 0 containers: []
	W1213 19:14:55.968361   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:55.968387   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:55.968416   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:56.002213   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:56.002285   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:56.029658   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:56.029741   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:56.125956   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:56.126039   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:56.139465   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:56.139492   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:56.191699   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:56.191735   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:56.278131   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:56.278179   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:56.314251   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:56.314283   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:56.383224   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:56.373948   12354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:56.374799   12354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:56.376672   12354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:56.377083   12354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:56.378823   12354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:56.373948   12354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:56.374799   12354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:56.376672   12354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:56.377083   12354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:56.378823   12354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:56.383248   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:56.383261   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:56.410961   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:56.410990   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:56.450595   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:56.450633   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:59.032642   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:14:59.043619   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:14:59.043712   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:14:59.070836   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:59.070859   92925 cri.go:89] found id: ""
	I1213 19:14:59.070867   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:14:59.070934   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:59.074933   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:14:59.075009   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:14:59.112290   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:59.112313   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:59.112318   92925 cri.go:89] found id: ""
	I1213 19:14:59.112325   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:14:59.112380   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:59.117374   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:59.121073   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:14:59.121166   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:14:59.159645   92925 cri.go:89] found id: ""
	I1213 19:14:59.159714   92925 logs.go:282] 0 containers: []
	W1213 19:14:59.159741   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:14:59.159763   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:14:59.159838   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:14:59.193406   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:59.193430   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:14:59.193435   92925 cri.go:89] found id: ""
	I1213 19:14:59.193443   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:14:59.193524   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:59.197329   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:59.201001   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:14:59.201109   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:14:59.227682   92925 cri.go:89] found id: ""
	I1213 19:14:59.227706   92925 logs.go:282] 0 containers: []
	W1213 19:14:59.227715   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:14:59.227721   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:14:59.227784   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:14:59.254466   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:59.254497   92925 cri.go:89] found id: ""
	I1213 19:14:59.254505   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:14:59.254561   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:14:59.258458   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:14:59.258530   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:14:59.285792   92925 cri.go:89] found id: ""
	I1213 19:14:59.285817   92925 logs.go:282] 0 containers: []
	W1213 19:14:59.285826   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:14:59.285835   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:14:59.285851   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:14:59.312955   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:14:59.312990   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:14:59.394158   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:14:59.394195   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:14:59.439055   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:14:59.439084   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:14:59.452200   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:14:59.452253   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:14:59.543624   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:14:59.535183   12473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:59.536016   12473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:59.537681   12473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:59.538269   12473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:59.539987   12473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:14:59.535183   12473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:59.536016   12473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:59.537681   12473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:59.538269   12473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:14:59.539987   12473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:14:59.543645   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:14:59.543659   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:14:59.571506   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:14:59.571533   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:14:59.615595   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:14:59.615634   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:14:59.717216   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:14:59.717256   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:14:59.764205   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:14:59.764243   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:14:59.840500   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:14:59.840538   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:02.367252   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:15:02.379179   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:15:02.379252   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:15:02.407368   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:02.407394   92925 cri.go:89] found id: ""
	I1213 19:15:02.407402   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:15:02.407464   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:02.411245   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:15:02.411321   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:15:02.439707   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:02.439727   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:02.439732   92925 cri.go:89] found id: ""
	I1213 19:15:02.439739   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:15:02.439793   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:02.443520   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:02.447838   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:15:02.447965   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:15:02.475049   92925 cri.go:89] found id: ""
	I1213 19:15:02.475077   92925 logs.go:282] 0 containers: []
	W1213 19:15:02.475086   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:15:02.475093   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:15:02.475153   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:15:02.509558   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:02.509582   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:02.509587   92925 cri.go:89] found id: ""
	I1213 19:15:02.509595   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:15:02.509652   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:02.513964   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:02.519816   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:15:02.519888   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:15:02.549572   92925 cri.go:89] found id: ""
	I1213 19:15:02.549639   92925 logs.go:282] 0 containers: []
	W1213 19:15:02.549653   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:15:02.549660   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:15:02.549720   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:15:02.578189   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:02.578215   92925 cri.go:89] found id: ""
	I1213 19:15:02.578224   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:15:02.578287   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:02.582094   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:15:02.582166   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:15:02.609748   92925 cri.go:89] found id: ""
	I1213 19:15:02.609774   92925 logs.go:282] 0 containers: []
	W1213 19:15:02.609783   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:15:02.609792   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:15:02.609823   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:02.660274   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:15:02.660313   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:02.737557   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:15:02.737590   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:15:02.821155   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:15:02.821193   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:15:02.853468   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:15:02.853501   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:15:02.866631   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:15:02.866661   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:02.895294   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:15:02.895323   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:02.940697   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:15:02.940734   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:02.970055   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:15:02.970088   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:03.002379   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:15:03.002409   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:15:03.096355   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:15:03.096390   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:15:03.189863   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:15:03.181408   12656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:03.182165   12656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:03.183899   12656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:03.184754   12656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:03.186389   12656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:15:03.181408   12656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:03.182165   12656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:03.183899   12656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:03.184754   12656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:03.186389   12656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:15:05.690514   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:15:05.702677   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:15:05.702772   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:15:05.730136   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:05.730160   92925 cri.go:89] found id: ""
	I1213 19:15:05.730169   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:15:05.730226   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:05.733966   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:15:05.734047   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:15:05.761337   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:05.761404   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:05.761425   92925 cri.go:89] found id: ""
	I1213 19:15:05.761450   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:15:05.761534   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:05.766511   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:05.770470   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:15:05.770545   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:15:05.803220   92925 cri.go:89] found id: ""
	I1213 19:15:05.803284   92925 logs.go:282] 0 containers: []
	W1213 19:15:05.803300   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:15:05.803306   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:15:05.803383   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:15:05.831772   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:05.831797   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:05.831803   92925 cri.go:89] found id: ""
	I1213 19:15:05.831810   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:15:05.831869   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:05.835814   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:05.839281   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:15:05.839351   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:15:05.870011   92925 cri.go:89] found id: ""
	I1213 19:15:05.870038   92925 logs.go:282] 0 containers: []
	W1213 19:15:05.870059   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:15:05.870065   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:15:05.870126   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:15:05.898850   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:05.898877   92925 cri.go:89] found id: ""
	I1213 19:15:05.898888   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:15:05.898943   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:05.903063   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:15:05.903177   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:15:05.930061   92925 cri.go:89] found id: ""
	I1213 19:15:05.930126   92925 logs.go:282] 0 containers: []
	W1213 19:15:05.930140   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:15:05.930150   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:15:05.930164   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:15:05.943518   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:15:05.943549   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:05.973699   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:15:05.973729   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:15:06.024591   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:15:06.024622   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:15:06.131997   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:15:06.132041   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:15:06.202110   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:15:06.193932   12751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:06.195174   12751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:06.196901   12751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:06.197593   12751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:06.198598   12751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:15:06.193932   12751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:06.195174   12751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:06.196901   12751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:06.197593   12751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:06.198598   12751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:15:06.202133   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:15:06.202145   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:06.241491   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:15:06.241525   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:06.289002   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:15:06.289076   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:06.376385   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:15:06.376422   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:06.406893   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:15:06.406920   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:06.438586   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:15:06.438615   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:15:09.021141   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:15:09.032497   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:15:09.032597   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:15:09.061840   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:09.061871   92925 cri.go:89] found id: ""
	I1213 19:15:09.061881   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:15:09.061939   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:09.065632   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:15:09.065706   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:15:09.094419   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:09.094444   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:09.094449   92925 cri.go:89] found id: ""
	I1213 19:15:09.094456   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:15:09.094517   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:09.098305   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:09.108354   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:15:09.108432   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:15:09.137672   92925 cri.go:89] found id: ""
	I1213 19:15:09.137706   92925 logs.go:282] 0 containers: []
	W1213 19:15:09.137716   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:15:09.137722   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:15:09.137785   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:15:09.170831   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:09.170854   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:09.170859   92925 cri.go:89] found id: ""
	I1213 19:15:09.170866   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:15:09.170929   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:09.174672   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:09.177949   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:15:09.178023   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:15:09.208255   92925 cri.go:89] found id: ""
	I1213 19:15:09.208282   92925 logs.go:282] 0 containers: []
	W1213 19:15:09.208291   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:15:09.208297   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:15:09.208352   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:15:09.234350   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:09.234373   92925 cri.go:89] found id: ""
	I1213 19:15:09.234381   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:15:09.234453   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:09.238030   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:15:09.238102   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:15:09.264310   92925 cri.go:89] found id: ""
	I1213 19:15:09.264335   92925 logs.go:282] 0 containers: []
	W1213 19:15:09.264344   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:15:09.264352   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:15:09.264365   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:09.295245   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:15:09.295276   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:15:09.369835   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:15:09.369869   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:15:09.472350   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:15:09.472384   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:09.500555   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:15:09.500589   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:15:09.535996   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:15:09.536032   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:15:09.552067   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:15:09.552096   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:15:09.624766   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:15:09.616285   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:09.617238   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:09.618950   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:09.619348   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:09.620912   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:15:09.616285   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:09.617238   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:09.618950   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:09.619348   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:09.620912   12896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:15:09.624810   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:15:09.624823   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:09.654769   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:15:09.654796   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:09.695636   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:15:09.695711   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:09.740840   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:15:09.740873   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:12.330150   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:15:12.341327   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:15:12.341430   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:15:12.373666   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:12.373692   92925 cri.go:89] found id: ""
	I1213 19:15:12.373699   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:15:12.373760   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:12.377493   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:15:12.377563   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:15:12.407860   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:12.407882   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:12.407886   92925 cri.go:89] found id: ""
	I1213 19:15:12.407897   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:15:12.407965   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:12.411939   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:12.416613   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:15:12.416687   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:15:12.447044   92925 cri.go:89] found id: ""
	I1213 19:15:12.447071   92925 logs.go:282] 0 containers: []
	W1213 19:15:12.447080   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:15:12.447086   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:15:12.447149   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:15:12.474565   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:12.474599   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:12.474604   92925 cri.go:89] found id: ""
	I1213 19:15:12.474612   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:15:12.474669   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:12.478501   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:12.482327   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:15:12.482425   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:15:12.519207   92925 cri.go:89] found id: ""
	I1213 19:15:12.519235   92925 logs.go:282] 0 containers: []
	W1213 19:15:12.519245   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:15:12.519252   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:15:12.519330   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:15:12.548236   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:12.548259   92925 cri.go:89] found id: ""
	I1213 19:15:12.548269   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:15:12.548334   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:12.552167   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:15:12.552292   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:15:12.581061   92925 cri.go:89] found id: ""
	I1213 19:15:12.581086   92925 logs.go:282] 0 containers: []
	W1213 19:15:12.581094   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:15:12.581103   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:15:12.581115   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:12.626762   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:15:12.626795   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:12.676771   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:15:12.676803   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:12.708623   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:15:12.708661   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:12.735332   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:15:12.735361   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:15:12.830566   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:15:12.830606   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:12.858035   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:15:12.858107   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:12.953406   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:15:12.953445   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:15:13.037585   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:15:13.037626   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:15:13.070076   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:15:13.070108   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:15:13.083239   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:15:13.083266   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:15:13.171369   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:15:13.163050   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:13.163831   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:13.165471   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:13.166105   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:13.167624   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:15:13.163050   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:13.163831   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:13.165471   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:13.166105   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:13.167624   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:15:15.672265   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:15:15.683518   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:15:15.683589   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:15:15.713736   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:15.713764   92925 cri.go:89] found id: ""
	I1213 19:15:15.713773   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:15:15.713845   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:15.718041   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:15:15.718116   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:15:15.745439   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:15.745462   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:15.745467   92925 cri.go:89] found id: ""
	I1213 19:15:15.745475   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:15:15.745555   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:15.749679   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:15.753271   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:15:15.753343   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:15:15.780766   92925 cri.go:89] found id: ""
	I1213 19:15:15.780791   92925 logs.go:282] 0 containers: []
	W1213 19:15:15.780800   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:15:15.780806   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:15:15.780867   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:15:15.809433   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:15.809453   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:15.809458   92925 cri.go:89] found id: ""
	I1213 19:15:15.809466   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:15:15.809521   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:15.813350   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:15.816829   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:15:15.816899   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:15:15.843466   92925 cri.go:89] found id: ""
	I1213 19:15:15.843491   92925 logs.go:282] 0 containers: []
	W1213 19:15:15.843501   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:15:15.843507   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:15:15.843566   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:15:15.869979   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:15.870003   92925 cri.go:89] found id: ""
	I1213 19:15:15.870012   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:15:15.870069   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:15.873941   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:15:15.874036   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:15:15.906204   92925 cri.go:89] found id: ""
	I1213 19:15:15.906268   92925 logs.go:282] 0 containers: []
	W1213 19:15:15.906283   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:15:15.906293   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:15:15.906305   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:15:16.002221   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:15:16.002261   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:16.030993   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:15:16.031024   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:16.078933   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:15:16.078967   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:15:16.173955   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:15:16.174010   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:15:16.207960   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:15:16.207989   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:15:16.221095   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:15:16.221124   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:15:16.290865   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:15:16.280288   13166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:16.281366   13166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:16.282142   13166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:16.283740   13166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:16.284314   13166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:15:16.280288   13166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:16.281366   13166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:16.282142   13166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:16.283740   13166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:16.284314   13166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:15:16.290940   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:15:16.290969   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:16.330431   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:15:16.330462   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:16.403747   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:15:16.403785   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:16.435000   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:15:16.435076   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
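Note on the log-gathering pattern above: for each control-plane component minikube first lists candidate containers with `sudo crictl ps -a --quiet --name=<component>` and then tails each container's log with `crictl logs --tail 400 <id>`. The Go sketch below reproduces that two-step pattern by shelling out to crictl; the component names and the 400-line tail are taken from the log, while the program structure is illustrative and not minikube's actual implementation.

    // Minimal sketch (not minikube's code) of the list-then-tail pattern shown above.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs runs `sudo crictl ps -a --quiet --name=<name>` and returns the IDs.
    func containerIDs(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    // tailLogs runs `sudo crictl logs --tail 400 <id>` for one container.
    func tailLogs(id string) (string, error) {
    	out, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
    	return string(out), err
    }

    func main() {
    	for _, name := range []string{"kube-apiserver", "etcd", "kube-scheduler", "kube-controller-manager"} {
    		ids, err := containerIDs(name)
    		if err != nil {
    			fmt.Printf("listing %s containers: %v\n", name, err)
    			continue
    		}
    		fmt.Printf("%d %s containers: %v\n", len(ids), name, ids)
    		for _, id := range ids {
    			if logs, err := tailLogs(id); err == nil {
    				fmt.Println(logs)
    			}
    		}
    	}
    }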
	I1213 19:15:18.967118   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:15:18.978473   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:15:18.978548   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:15:19.009416   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:19.009442   92925 cri.go:89] found id: ""
	I1213 19:15:19.009450   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:15:19.009506   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:19.013229   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:15:19.013304   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:15:19.046195   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:19.046217   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:19.046221   92925 cri.go:89] found id: ""
	I1213 19:15:19.046228   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:15:19.046284   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:19.050380   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:19.055287   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:15:19.055364   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:15:19.084697   92925 cri.go:89] found id: ""
	I1213 19:15:19.084724   92925 logs.go:282] 0 containers: []
	W1213 19:15:19.084734   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:15:19.084740   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:15:19.084799   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:15:19.134188   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:19.134212   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:19.134217   92925 cri.go:89] found id: ""
	I1213 19:15:19.134225   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:15:19.134281   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:19.139452   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:19.143380   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:15:19.143515   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:15:19.176707   92925 cri.go:89] found id: ""
	I1213 19:15:19.176733   92925 logs.go:282] 0 containers: []
	W1213 19:15:19.176742   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:15:19.176748   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:15:19.176808   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:15:19.205658   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:19.205681   92925 cri.go:89] found id: ""
	I1213 19:15:19.205689   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:15:19.205769   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:19.209480   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:15:19.209556   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:15:19.236187   92925 cri.go:89] found id: ""
	I1213 19:15:19.236210   92925 logs.go:282] 0 containers: []
	W1213 19:15:19.236219   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:15:19.236227   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:15:19.236239   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:15:19.335347   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:15:19.335384   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:15:19.347594   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:15:19.347622   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:15:19.423749   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:15:19.415662   13274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:19.416536   13274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:19.418222   13274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:19.418572   13274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:19.420106   13274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:15:19.415662   13274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:19.416536   13274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:19.418222   13274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:19.418572   13274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:19.420106   13274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:15:19.423773   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:15:19.423785   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:19.458293   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:15:19.458322   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:19.491891   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:15:19.491981   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:15:19.532203   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:15:19.532289   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:19.572383   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:15:19.572416   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:19.623843   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:15:19.623878   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:19.701590   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:15:19.701669   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:19.730646   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:15:19.730674   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
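Every `kubectl describe nodes` attempt above fails with "connection refused" on localhost:8443, so the apiserver is not yet accepting connections and the whole gathering cycle repeats every few seconds. The sketch below captures the same wait as a simple readiness poll; the /readyz endpoint, the self-signed-certificate handling, and the retry cadence are assumptions for illustration, not minikube's exact logic.

    // Minimal readiness poll against the local apiserver port seen in the log.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		// The apiserver serves a self-signed certificate on localhost, so
    		// skip verification for this local health probe only.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://localhost:8443/readyz")
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("apiserver is ready")
    				return
    			}
    			fmt.Printf("apiserver answered with %d, retrying\n", resp.StatusCode)
    		} else {
    			fmt.Printf("apiserver not reachable yet: %v\n", err)
    		}
    		time.Sleep(3 * time.Second)
    	}
    	fmt.Println("gave up waiting for the apiserver")
    }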
	I1213 19:15:22.313136   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:15:22.324070   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:15:22.324192   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:15:22.354911   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:22.354936   92925 cri.go:89] found id: ""
	I1213 19:15:22.354944   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:15:22.355017   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:22.359138   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:15:22.359232   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:15:22.387533   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:22.387553   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:22.387559   92925 cri.go:89] found id: ""
	I1213 19:15:22.387567   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:15:22.387622   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:22.391451   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:22.395283   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:15:22.395396   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:15:22.424307   92925 cri.go:89] found id: ""
	I1213 19:15:22.424330   92925 logs.go:282] 0 containers: []
	W1213 19:15:22.424338   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:15:22.424345   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:15:22.424406   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:15:22.453085   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:22.453146   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:22.453167   92925 cri.go:89] found id: ""
	I1213 19:15:22.453192   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:15:22.453265   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:22.457420   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:22.461164   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:15:22.461238   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:15:22.491907   92925 cri.go:89] found id: ""
	I1213 19:15:22.491930   92925 logs.go:282] 0 containers: []
	W1213 19:15:22.491939   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:15:22.491944   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:15:22.492029   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:15:22.527521   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:22.527588   92925 cri.go:89] found id: ""
	I1213 19:15:22.527615   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:15:22.527710   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:22.531946   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:15:22.532027   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:15:22.559453   92925 cri.go:89] found id: ""
	I1213 19:15:22.559480   92925 logs.go:282] 0 containers: []
	W1213 19:15:22.559499   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:15:22.559510   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:15:22.559522   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:22.601772   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:15:22.601808   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:22.649158   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:15:22.649193   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:22.676639   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:15:22.676667   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:15:22.777850   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:15:22.777888   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:15:22.851444   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:15:22.842501   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:22.843358   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:22.845491   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:22.846536   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:22.847439   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:15:22.842501   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:22.843358   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:22.845491   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:22.846536   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:22.847439   13428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:15:22.851468   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:15:22.851480   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:22.933320   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:15:22.933358   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:22.962559   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:15:22.962589   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:15:23.059725   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:15:23.059803   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:15:23.109255   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:15:23.109286   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:15:23.122814   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:15:23.122844   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
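The "container status" step in each cycle uses a runtime fallback: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a, i.e. prefer crictl and fall back to the Docker CLI when crictl is unavailable or fails. A small Go sketch of that fallback, again purely illustrative rather than minikube's code:

    // Prefer crictl when present; otherwise fall back to `docker ps -a`.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func containerStatus() (string, error) {
    	// Prefer the CRI-compatible client when it is on PATH.
    	if _, err := exec.LookPath("crictl"); err == nil {
    		if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
    			return string(out), nil
    		}
    	}
    	// Fall back to the Docker CLI, as the shell command in the log does.
    	out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
    	if err != nil {
    		return "", fmt.Errorf("neither crictl nor docker could list containers: %w", err)
    	}
    	return string(out), nil
    }

    func main() {
    	status, err := containerStatus()
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Print(status)
    }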
	I1213 19:15:25.651780   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:15:25.662957   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:15:25.663032   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:15:25.696971   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:25.696993   92925 cri.go:89] found id: ""
	I1213 19:15:25.697001   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:15:25.697087   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:25.701838   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:15:25.701919   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:15:25.738295   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:25.738373   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:25.738386   92925 cri.go:89] found id: ""
	I1213 19:15:25.738395   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:15:25.738459   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:25.742364   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:25.746297   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:15:25.746400   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:15:25.772105   92925 cri.go:89] found id: ""
	I1213 19:15:25.772178   92925 logs.go:282] 0 containers: []
	W1213 19:15:25.772201   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:15:25.772221   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:15:25.772305   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:15:25.799458   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:25.799526   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:25.799546   92925 cri.go:89] found id: ""
	I1213 19:15:25.799570   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:15:25.799645   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:25.803647   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:25.807583   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:15:25.807695   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:15:25.834975   92925 cri.go:89] found id: ""
	I1213 19:15:25.835051   92925 logs.go:282] 0 containers: []
	W1213 19:15:25.835066   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:15:25.835073   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:15:25.835133   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:15:25.864722   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:25.864769   92925 cri.go:89] found id: ""
	I1213 19:15:25.864778   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:15:25.864836   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:25.868764   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:15:25.868838   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:15:25.897111   92925 cri.go:89] found id: ""
	I1213 19:15:25.897133   92925 logs.go:282] 0 containers: []
	W1213 19:15:25.897141   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:15:25.897162   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:15:25.897174   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:15:26.007072   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:15:26.007104   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:15:26.025166   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:15:26.025201   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:15:26.111354   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:15:26.097401   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:26.097781   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:26.105030   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:26.105458   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:26.107065   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:15:26.097401   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:26.097781   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:26.105030   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:26.105458   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:26.107065   13546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:15:26.111374   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:15:26.111387   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:26.141476   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:15:26.141507   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:26.169374   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:15:26.169404   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:15:26.246093   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:15:26.246133   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:15:26.297802   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:15:26.297829   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:26.325154   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:15:26.325182   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:26.368489   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:15:26.368524   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:26.414072   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:15:26.414110   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:29.001164   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:15:29.013204   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:15:29.013272   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:15:29.047888   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:29.047909   92925 cri.go:89] found id: ""
	I1213 19:15:29.047918   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:15:29.047982   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:29.051890   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:15:29.051971   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:15:29.077464   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:29.077486   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:29.077490   92925 cri.go:89] found id: ""
	I1213 19:15:29.077498   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:15:29.077553   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:29.081462   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:29.084988   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:15:29.085157   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:15:29.115595   92925 cri.go:89] found id: ""
	I1213 19:15:29.115621   92925 logs.go:282] 0 containers: []
	W1213 19:15:29.115631   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:15:29.115637   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:15:29.115697   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:15:29.160656   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:29.160729   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:29.160748   92925 cri.go:89] found id: ""
	I1213 19:15:29.160772   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:15:29.160853   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:29.165160   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:29.168775   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:15:29.168891   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:15:29.199867   92925 cri.go:89] found id: ""
	I1213 19:15:29.199890   92925 logs.go:282] 0 containers: []
	W1213 19:15:29.199899   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:15:29.199911   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:15:29.200009   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:15:29.226478   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:29.226502   92925 cri.go:89] found id: ""
	I1213 19:15:29.226511   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:15:29.226565   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:29.230306   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:15:29.230382   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:15:29.260973   92925 cri.go:89] found id: ""
	I1213 19:15:29.260999   92925 logs.go:282] 0 containers: []
	W1213 19:15:29.261034   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:15:29.261044   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:15:29.261060   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:29.288533   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:15:29.288560   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:29.317072   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:15:29.317145   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:29.343899   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:15:29.343926   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:15:29.424466   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:15:29.424502   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:15:29.437265   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:15:29.437314   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:15:29.525751   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:15:29.505457   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:29.506350   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:29.518441   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:29.520261   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:29.521214   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:15:29.505457   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:29.506350   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:29.518441   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:29.520261   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:29.521214   13707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:15:29.525774   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:15:29.525787   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:29.565912   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:15:29.565947   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:29.614921   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:15:29.614962   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:29.695191   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:15:29.695229   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:15:29.726876   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:15:29.726907   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:15:32.331342   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:15:32.342123   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:15:32.342193   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:15:32.377492   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:32.377512   92925 cri.go:89] found id: ""
	I1213 19:15:32.377520   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:15:32.377603   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:32.381461   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:15:32.381535   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:15:32.408828   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:32.408849   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:32.408853   92925 cri.go:89] found id: ""
	I1213 19:15:32.408861   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:15:32.408913   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:32.412666   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:32.416683   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:15:32.416757   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:15:32.444710   92925 cri.go:89] found id: ""
	I1213 19:15:32.444734   92925 logs.go:282] 0 containers: []
	W1213 19:15:32.444744   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:15:32.444750   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:15:32.444842   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:15:32.470813   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:32.470834   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:32.470839   92925 cri.go:89] found id: ""
	I1213 19:15:32.470846   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:15:32.470904   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:32.474746   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:32.478110   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:15:32.478180   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:15:32.505590   92925 cri.go:89] found id: ""
	I1213 19:15:32.505616   92925 logs.go:282] 0 containers: []
	W1213 19:15:32.505625   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:15:32.505630   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:15:32.505685   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:15:32.534851   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:32.534873   92925 cri.go:89] found id: ""
	I1213 19:15:32.534882   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:15:32.534942   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:32.538913   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:15:32.539005   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:15:32.570980   92925 cri.go:89] found id: ""
	I1213 19:15:32.571020   92925 logs.go:282] 0 containers: []
	W1213 19:15:32.571029   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:15:32.571055   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:15:32.571075   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:15:32.672697   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:15:32.672739   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:15:32.685325   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:15:32.685360   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:15:32.762805   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:15:32.754695   13818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:32.755445   13818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:32.756898   13818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:32.757344   13818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:32.759247   13818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:15:32.754695   13818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:32.755445   13818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:32.756898   13818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:32.757344   13818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:32.759247   13818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:15:32.762877   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:15:32.762899   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:32.788216   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:15:32.788243   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:32.831764   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:15:32.831797   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:32.861451   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:15:32.861481   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:32.889040   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:15:32.889113   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:15:32.962682   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:15:32.962721   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:33.005926   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:15:33.005963   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:33.113066   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:15:33.113100   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:15:35.646466   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:15:35.657328   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:15:35.657400   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:15:35.682772   92925 cri.go:89] found id: "667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:35.682796   92925 cri.go:89] found id: ""
	I1213 19:15:35.682805   92925 logs.go:282] 1 containers: [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e]
	I1213 19:15:35.682862   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:35.686943   92925 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:15:35.687017   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:15:35.713394   92925 cri.go:89] found id: "808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:35.713426   92925 cri.go:89] found id: "c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:35.713433   92925 cri.go:89] found id: ""
	I1213 19:15:35.713440   92925 logs.go:282] 2 containers: [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f]
	I1213 19:15:35.713492   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:35.717236   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:35.720957   92925 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:15:35.721060   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:15:35.747062   92925 cri.go:89] found id: ""
	I1213 19:15:35.747139   92925 logs.go:282] 0 containers: []
	W1213 19:15:35.747155   92925 logs.go:284] No container was found matching "coredns"
	I1213 19:15:35.747162   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:15:35.747223   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:15:35.780788   92925 cri.go:89] found id: "fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:35.780809   92925 cri.go:89] found id: "bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:35.780814   92925 cri.go:89] found id: ""
	I1213 19:15:35.780822   92925 logs.go:282] 2 containers: [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43]
	I1213 19:15:35.780877   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:35.784913   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:35.788950   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:15:35.789084   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:15:35.817183   92925 cri.go:89] found id: ""
	I1213 19:15:35.817206   92925 logs.go:282] 0 containers: []
	W1213 19:15:35.817217   92925 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:15:35.817223   92925 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:15:35.817285   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:15:35.844649   92925 cri.go:89] found id: "5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:35.844674   92925 cri.go:89] found id: ""
	I1213 19:15:35.844682   92925 logs.go:282] 1 containers: [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee]
	I1213 19:15:35.844741   92925 ssh_runner.go:195] Run: which crictl
	I1213 19:15:35.848694   92925 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:15:35.848772   92925 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:15:35.880264   92925 cri.go:89] found id: ""
	I1213 19:15:35.880293   92925 logs.go:282] 0 containers: []
	W1213 19:15:35.880302   92925 logs.go:284] No container was found matching "kindnet"
	I1213 19:15:35.880311   92925 logs.go:123] Gathering logs for etcd [c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f] ...
	I1213 19:15:35.880323   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c27cd94ed69a16e07895e78ac29a28f29ce6e60ea3a4cd418224823bc85c845f"
	I1213 19:15:35.928133   92925 logs.go:123] Gathering logs for kube-scheduler [fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168] ...
	I1213 19:15:35.928168   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fdb20aa5e42312ab9a6fcef50cf57c757e69b0f7026229aadb7d5d0a6ad99168"
	I1213 19:15:36.005056   92925 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:15:36.005095   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:15:36.088199   92925 logs.go:123] Gathering logs for kubelet ...
	I1213 19:15:36.088234   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:15:36.195615   92925 logs.go:123] Gathering logs for kube-apiserver [667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e] ...
	I1213 19:15:36.195657   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 667060dcec53f57cc3e9dc95c972999e7343ceb89f6ff071e09fc9d86bac819e"
	I1213 19:15:36.222570   92925 logs.go:123] Gathering logs for kube-scheduler [bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43] ...
	I1213 19:15:36.222597   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bdb79c020dc095331495796d3b34352b2d1ca645e1298f4ed3a7e3ec88402c43"
	I1213 19:15:36.253158   92925 logs.go:123] Gathering logs for kube-controller-manager [5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee] ...
	I1213 19:15:36.253189   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5dc19f95afeba702cb6f02d41117d0b89bd277bff86badd273c25ebfa437eaee"
	I1213 19:15:36.282294   92925 logs.go:123] Gathering logs for container status ...
	I1213 19:15:36.282324   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:15:36.315027   92925 logs.go:123] Gathering logs for dmesg ...
	I1213 19:15:36.315057   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:15:36.327415   92925 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:15:36.327445   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:15:36.397770   92925 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 19:15:36.388485   14006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:36.389249   14006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:36.391121   14006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:36.392189   14006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:36.392759   14006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 19:15:36.388485   14006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:36.389249   14006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:36.391121   14006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:36.392189   14006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 19:15:36.392759   14006 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:15:36.397793   92925 logs.go:123] Gathering logs for etcd [808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894] ...
	I1213 19:15:36.397809   92925 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 808552c2637bb2c5ca2397dfcc60417b6a515c52b99205cf30a1d3931d592894"
	I1213 19:15:38.950291   92925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:15:38.966129   92925 out.go:203] 
	W1213 19:15:38.969186   92925 out.go:285] X Exiting due to K8S_APISERVER_MISSING: adding node: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1213 19:15:38.969230   92925 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1213 19:15:38.969244   92925 out.go:285] * Related issues:
	W1213 19:15:38.969256   92925 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1213 19:15:38.969271   92925 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1213 19:15:38.972406   92925 out.go:203] 
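	The K8S_APISERVER_MISSING exit above is reported once minikube's wait loop can no longer find a kube-apiserver process on the node. As a rough sketch (assuming the ha-605114 profile from this run is still up and reachable over `minikube ssh`), the same checks can be repeated by hand:

	    # Hypothetical manual re-run of the checks minikube performed above (profile name taken from this log).
	    minikube ssh -p ha-605114 -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'   # process check that failed above
	    minikube ssh -p ha-605114 -- sudo crictl ps -a --name=kube-apiserver        # apiserver container state
	    minikube ssh -p ha-605114 -- sudo journalctl -u kubelet -n 400              # kubelet tail, as gathered above
	    minikube ssh -p ha-605114 -- sudo journalctl -u crio -n 400                 # CRI-O tail, as gathered above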
	
	
	==> CRI-O <==
	Dec 13 19:09:40 ha-605114 crio[670]: time="2025-12-13T19:09:40.008646414Z" level=info msg="Started container" PID=1413 containerID=162b495909eae3cb5f079d5fd260e61e560cd11212e69ad52138f4180f770a5b description=kube-system/storage-provisioner/storage-provisioner id=78f061d7-6d54-48f8-b513-d5c320e8e810 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3b4d0206cec1a1b4c0b5752a4babdaf8710471f5502067896b44e2d2df0c4d5b
	Dec 13 19:09:40 ha-605114 crio[670]: time="2025-12-13T19:09:40.011070102Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.12.1" id=d15204a7-37cc-4d8c-a231-166dcd68a520 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 19:09:40 ha-605114 crio[670]: time="2025-12-13T19:09:40.012539045Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.12.1" id=6b3690d3-7f7d-43f9-95f1-1cd8e6e953ff name=/runtime.v1.ImageService/ImageStatus
	Dec 13 19:09:40 ha-605114 crio[670]: time="2025-12-13T19:09:40.02550851Z" level=info msg="Creating container: kube-system/coredns-66bc5c9577-85rpk/coredns" id=ac3e351b-9839-445c-b06c-72f089234671 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 19:09:40 ha-605114 crio[670]: time="2025-12-13T19:09:40.025812066Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 19:09:40 ha-605114 crio[670]: time="2025-12-13T19:09:40.048513937Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 19:09:40 ha-605114 crio[670]: time="2025-12-13T19:09:40.049307526Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 19:09:40 ha-605114 crio[670]: time="2025-12-13T19:09:40.073222358Z" level=info msg="Created container 98620d4f3c674bb9bab6e41c90c32e2b069e67c18730baafb91af41ae8c19bcf: default/busybox-7b57f96db7-h5qqv/busybox" id=3c28fa9a-be33-4fec-ad16-52c4765c6b6f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 19:09:40 ha-605114 crio[670]: time="2025-12-13T19:09:40.082412808Z" level=info msg="Starting container: 98620d4f3c674bb9bab6e41c90c32e2b069e67c18730baafb91af41ae8c19bcf" id=7ee27ecf-6fea-48b9-9feb-9cb5f5270b26 name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 19:09:40 ha-605114 crio[670]: time="2025-12-13T19:09:40.109207129Z" level=info msg="Started container" PID=1422 containerID=98620d4f3c674bb9bab6e41c90c32e2b069e67c18730baafb91af41ae8c19bcf description=default/busybox-7b57f96db7-h5qqv/busybox id=7ee27ecf-6fea-48b9-9feb-9cb5f5270b26 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3641321fd538fed941abd3cee5bdec42be3fbe581a0a743eea30ee6edf2692ee
	Dec 13 19:09:40 ha-605114 crio[670]: time="2025-12-13T19:09:40.121281524Z" level=info msg="Created container 511836b213244a6dfa3897abb4838a98fc68e420901993467750d852b23b8505: kube-system/coredns-66bc5c9577-85rpk/coredns" id=ac3e351b-9839-445c-b06c-72f089234671 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 19:09:40 ha-605114 crio[670]: time="2025-12-13T19:09:40.122743263Z" level=info msg="Starting container: 511836b213244a6dfa3897abb4838a98fc68e420901993467750d852b23b8505" id=4e4e597f-bb09-435f-a3da-58627ddb7595 name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 19:09:40 ha-605114 crio[670]: time="2025-12-13T19:09:40.124507425Z" level=info msg="Started container" PID=1433 containerID=511836b213244a6dfa3897abb4838a98fc68e420901993467750d852b23b8505 description=kube-system/coredns-66bc5c9577-85rpk/coredns id=4e4e597f-bb09-435f-a3da-58627ddb7595 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1d4641fc3fdaccf9146fa15e852f55d85346be6c485420108067be6aabe0b5f4
	Dec 13 19:09:45 ha-605114 crio[670]: time="2025-12-13T19:09:45.122399466Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 19:09:45 ha-605114 crio[670]: time="2025-12-13T19:09:45.129604955Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 19:09:45 ha-605114 crio[670]: time="2025-12-13T19:09:45.129827191Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 19:09:45 ha-605114 crio[670]: time="2025-12-13T19:09:45.129946091Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 19:09:45 ha-605114 crio[670]: time="2025-12-13T19:09:45.139648811Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 19:09:45 ha-605114 crio[670]: time="2025-12-13T19:09:45.139699543Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 19:09:45 ha-605114 crio[670]: time="2025-12-13T19:09:45.139727531Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 19:09:45 ha-605114 crio[670]: time="2025-12-13T19:09:45.147861576Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 19:09:45 ha-605114 crio[670]: time="2025-12-13T19:09:45.148118551Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 19:09:45 ha-605114 crio[670]: time="2025-12-13T19:09:45.148270222Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 19:09:45 ha-605114 crio[670]: time="2025-12-13T19:09:45.153836563Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 19:09:45 ha-605114 crio[670]: time="2025-12-13T19:09:45.154024681Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	511836b213244       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   7 minutes ago       Running             coredns                   2                   1d4641fc3fdac       coredns-66bc5c9577-85rpk            kube-system
	98620d4f3c674       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   7 minutes ago       Running             busybox                   2                   3641321fd538f       busybox-7b57f96db7-h5qqv            default
	162b495909eae       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   7 minutes ago       Running             storage-provisioner       4                   3b4d0206cec1a       storage-provisioner                 kube-system
	167e9e0789f86       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   7 minutes ago       Running             kube-controller-manager   7                   c35b44e70d6d7       kube-controller-manager-ha-605114   kube-system
	7bc9cb09a081e       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   8 minutes ago       Exited              kube-controller-manager   6                   c35b44e70d6d7       kube-controller-manager-ha-605114   kube-system
	76f4d2ef7a334       369db9dfa6fa96c1f4a0f3c827dbe864b5ded1802c8b4810b5ff9fcc5f5f2c70   9 minutes ago       Running             kube-vip                  3                   6e0df90fd1fab       kube-vip-ha-605114                  kube-system
	7db7b17ab2144       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   9 minutes ago       Running             coredns                   2                   d895cdca857a1       coredns-66bc5c9577-rc9qg            kube-system
	adb6a0d2cd304       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   9 minutes ago       Running             kube-proxy                2                   511ce74a57340       kube-proxy-c6t4v                    kube-system
	f1a416886d288       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   9 minutes ago       Running             kindnet-cni               2                   e61041a4c5e3e       kindnet-dtnb7                       kube-system
	9a81ddd488bb7       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   9 minutes ago       Running             etcd                      2                   a40bba21dff67       etcd-ha-605114                      kube-system
	ee202abc8dba3       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   9 minutes ago       Running             kube-scheduler            2                   5a646569f389f       kube-scheduler-ha-605114            kube-system
	3c729bb1538bf       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   9 minutes ago       Running             kube-apiserver            2                   390331a7238b2       kube-apiserver-ha-605114            kube-system
	2b3744a5aa7a9       369db9dfa6fa96c1f4a0f3c827dbe864b5ded1802c8b4810b5ff9fcc5f5f2c70   9 minutes ago       Exited              kube-vip                  2                   6e0df90fd1fab       kube-vip-ha-605114                  kube-system
	
	
	==> coredns [511836b213244a6dfa3897abb4838a98fc68e420901993467750d852b23b8505] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60720 - 44913 "HINFO IN 3829035828325911617.4912160736216291985. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012907336s
	
	
	==> coredns [7db7b17ab2144a863bb29b6e2f750b6eb865e786cf824a74c0b415ac4077800a] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58025 - 60628 "HINFO IN 3868133962360849883.307927823530690311. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.054923758s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
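	The repeated connection errors to 10.96.0.1:443 in both coredns instances point at the in-cluster `kubernetes` Service (the apiserver's ClusterIP in minikube's default service CIDR), consistent with the apiserver wait failure earlier in this log. A quick confirmation, assuming kubectl can reach a healthy control-plane endpoint:

	    kubectl get svc kubernetes -o wide                                      # normally 10.96.0.1:443
	    kubectl get endpointslices -l kubernetes.io/service-name=kubernetes     # apiserver endpoints backing it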
	
	
	==> describe nodes <==
	Name:               ha-605114
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-605114
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=142a8bd7cb3f031b5f72a3965bb211dc77d9e1a7
	                    minikube.k8s.io/name=ha-605114
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T18_59_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 18:59:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-605114
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 19:17:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 19:15:26 +0000   Sat, 13 Dec 2025 18:59:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 19:15:26 +0000   Sat, 13 Dec 2025 18:59:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 19:15:26 +0000   Sat, 13 Dec 2025 18:59:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 19:15:26 +0000   Sat, 13 Dec 2025 19:00:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-605114
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 78f85184c267cd52312ad0096937f858
	  System UUID:                8ff9857c-e2f0-4d86-9970-2f9e1bad48df
	  Boot ID:                    76aeba50-958b-45ee-957d-e00cd07a99b2
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-h5qqv             0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 coredns-66bc5c9577-85rpk             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     17m
	  kube-system                 coredns-66bc5c9577-rc9qg             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     17m
	  kube-system                 etcd-ha-605114                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         18m
	  kube-system                 kindnet-dtnb7                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      17m
	  kube-system                 kube-apiserver-ha-605114             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-ha-605114    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-c6t4v                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-ha-605114             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-vip-ha-605114                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 17m                    kube-proxy       
	  Normal   Starting                 9m19s                  kube-proxy       
	  Normal   Starting                 11m                    kube-proxy       
	  Warning  CgroupV1                 18m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     18m (x8 over 18m)      kubelet          Node ha-605114 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    18m (x8 over 18m)      kubelet          Node ha-605114 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  18m (x8 over 18m)      kubelet          Node ha-605114 status is now: NodeHasSufficientMemory
	  Normal   Starting                 18m                    kubelet          Starting kubelet.
	  Normal   Starting                 18m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 18m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     18m                    kubelet          Node ha-605114 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  18m                    kubelet          Node ha-605114 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    18m                    kubelet          Node ha-605114 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           17m                    node-controller  Node ha-605114 event: Registered Node ha-605114 in Controller
	  Normal   RegisteredNode           17m                    node-controller  Node ha-605114 event: Registered Node ha-605114 in Controller
	  Normal   NodeReady                17m                    kubelet          Node ha-605114 status is now: NodeReady
	  Normal   RegisteredNode           16m                    node-controller  Node ha-605114 event: Registered Node ha-605114 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-605114 event: Registered Node ha-605114 in Controller
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node ha-605114 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node ha-605114 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x8 over 12m)      kubelet          Node ha-605114 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m                    node-controller  Node ha-605114 event: Registered Node ha-605114 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-605114 event: Registered Node ha-605114 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-605114 event: Registered Node ha-605114 in Controller
	  Normal   Starting                 9m31s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m31s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9m31s (x8 over 9m31s)  kubelet          Node ha-605114 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m31s (x8 over 9m31s)  kubelet          Node ha-605114 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m31s (x8 over 9m31s)  kubelet          Node ha-605114 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m42s                  node-controller  Node ha-605114 event: Registered Node ha-605114 in Controller
	  Normal   RegisteredNode           56s                    node-controller  Node ha-605114 event: Registered Node ha-605114 in Controller
	
	
	Name:               ha-605114-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-605114-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=142a8bd7cb3f031b5f72a3965bb211dc77d9e1a7
	                    minikube.k8s.io/name=ha-605114
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_13T19_00_04_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 19:00:03 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-605114-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 19:07:15 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 13 Dec 2025 19:05:54 +0000   Sat, 13 Dec 2025 19:10:33 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 13 Dec 2025 19:05:54 +0000   Sat, 13 Dec 2025 19:10:33 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 13 Dec 2025 19:05:54 +0000   Sat, 13 Dec 2025 19:10:33 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 13 Dec 2025 19:05:54 +0000   Sat, 13 Dec 2025 19:10:33 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-605114-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 78f85184c267cd52312ad0096937f858
	  System UUID:                c9a90528-cc46-44be-a006-2245d1e8d275
	  Boot ID:                    76aeba50-958b-45ee-957d-e00cd07a99b2
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-gqp98                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 etcd-ha-605114-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         17m
	  kube-system                 kindnet-hxgh6                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      17m
	  kube-system                 kube-apiserver-ha-605114-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-ha-605114-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-87qlc                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-ha-605114-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-vip-ha-605114-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 17m                kube-proxy       
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   RegisteredNode           17m                node-controller  Node ha-605114-m02 event: Registered Node ha-605114-m02 in Controller
	  Normal   RegisteredNode           17m                node-controller  Node ha-605114-m02 event: Registered Node ha-605114-m02 in Controller
	  Normal   RegisteredNode           16m                node-controller  Node ha-605114-m02 event: Registered Node ha-605114-m02 in Controller
	  Normal   NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node ha-605114-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x8 over 13m)  kubelet          Node ha-605114-m02 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 13m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node ha-605114-m02 status is now: NodeHasSufficientMemory
	  Normal   Starting                 13m                kubelet          Starting kubelet.
	  Normal   NodeNotReady             12m                node-controller  Node ha-605114-m02 status is now: NodeNotReady
	  Normal   RegisteredNode           12m                node-controller  Node ha-605114-m02 event: Registered Node ha-605114-m02 in Controller
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node ha-605114-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node ha-605114-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x8 over 12m)  kubelet          Node ha-605114-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m                node-controller  Node ha-605114-m02 event: Registered Node ha-605114-m02 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-605114-m02 event: Registered Node ha-605114-m02 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-605114-m02 event: Registered Node ha-605114-m02 in Controller
	  Normal   RegisteredNode           7m42s              node-controller  Node ha-605114-m02 event: Registered Node ha-605114-m02 in Controller
	  Normal   NodeNotReady             6m51s              node-controller  Node ha-605114-m02 status is now: NodeNotReady
	  Normal   RegisteredNode           56s                node-controller  Node ha-605114-m02 event: Registered Node ha-605114-m02 in Controller
	
	
	Name:               ha-605114-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-605114-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=142a8bd7cb3f031b5f72a3965bb211dc77d9e1a7
	                    minikube.k8s.io/name=ha-605114
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_13T19_02_38_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 19:02:38 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-605114-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 19:07:09 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 13 Dec 2025 19:06:39 +0000   Sat, 13 Dec 2025 19:10:33 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 13 Dec 2025 19:06:39 +0000   Sat, 13 Dec 2025 19:10:33 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 13 Dec 2025 19:06:39 +0000   Sat, 13 Dec 2025 19:10:33 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 13 Dec 2025 19:06:39 +0000   Sat, 13 Dec 2025 19:10:33 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-605114-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 78f85184c267cd52312ad0096937f858
	  System UUID:                1710ae92-5ee6-4178-a2ff-b2523f5ef2e1
	  Boot ID:                    76aeba50-958b-45ee-957d-e00cd07a99b2
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-wl925    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kindnet-9xnpk               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-proxy-lqp4f            0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 14m                kube-proxy       
	  Normal   NodeHasNoDiskPressure    14m (x3 over 14m)  kubelet          Node ha-605114-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14m (x3 over 14m)  kubelet          Node ha-605114-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  14m (x3 over 14m)  kubelet          Node ha-605114-m04 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           14m                node-controller  Node ha-605114-m04 event: Registered Node ha-605114-m04 in Controller
	  Normal   RegisteredNode           14m                node-controller  Node ha-605114-m04 event: Registered Node ha-605114-m04 in Controller
	  Normal   RegisteredNode           14m                node-controller  Node ha-605114-m04 event: Registered Node ha-605114-m04 in Controller
	  Normal   NodeReady                14m                kubelet          Node ha-605114-m04 status is now: NodeReady
	  Normal   RegisteredNode           12m                node-controller  Node ha-605114-m04 event: Registered Node ha-605114-m04 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-605114-m04 event: Registered Node ha-605114-m04 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-605114-m04 event: Registered Node ha-605114-m04 in Controller
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node ha-605114-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node ha-605114-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node ha-605114-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node ha-605114-m04 event: Registered Node ha-605114-m04 in Controller
	  Normal   RegisteredNode           7m42s              node-controller  Node ha-605114-m04 event: Registered Node ha-605114-m04 in Controller
	  Normal   NodeNotReady             6m51s              node-controller  Node ha-605114-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           56s                node-controller  Node ha-605114-m04 event: Registered Node ha-605114-m04 in Controller
	
	
	Name:               ha-605114-m05
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-605114-m05
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=142a8bd7cb3f031b5f72a3965bb211dc77d9e1a7
	                    minikube.k8s.io/name=ha-605114
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_13T19_16_31_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 19:16:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-605114-m05
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 19:17:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 19:17:15 +0000   Sat, 13 Dec 2025 19:16:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 19:17:15 +0000   Sat, 13 Dec 2025 19:16:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 19:17:15 +0000   Sat, 13 Dec 2025 19:16:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 19:17:15 +0000   Sat, 13 Dec 2025 19:17:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.6
	  Hostname:    ha-605114-m05
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 78f85184c267cd52312ad0096937f858
	  System UUID:                d79d0921-9cb8-408f-9cee-594e7d75ae84
	  Boot ID:                    76aeba50-958b-45ee-957d-e00cd07a99b2
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-6ldgc                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 etcd-ha-605114-m05                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         52s
	  kube-system                 kindnet-c6v4q                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      55s
	  kube-system                 kube-apiserver-ha-605114-m05             250m (12%)    0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 kube-controller-manager-ha-605114-m05    200m (10%)    0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 kube-proxy-5h27j                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-scheduler-ha-605114-m05             100m (5%)     0 (0%)      0 (0%)           0 (0%)         51s
	  kube-system                 kube-vip-ha-605114-m05                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  Starting        50s   kube-proxy       
	  Normal  RegisteredNode  51s   node-controller  Node ha-605114-m05 event: Registered Node ha-605114-m05 in Controller
	  Normal  RegisteredNode  51s   node-controller  Node ha-605114-m05 event: Registered Node ha-605114-m05 in Controller
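	In the node descriptions above, ha-605114-m02 and ha-605114-m04 still carry node.kubernetes.io/unreachable taints and Unknown conditions, while ha-605114 and the newly added ha-605114-m05 report Ready. A short way to re-check the same state once the apiserver answers (node names taken from this run):

	    kubectl get nodes -o wide
	    kubectl get node ha-605114-m02 -o jsonpath='{.spec.taints}'; echo        # unreachable taints shown above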
	
	
	==> dmesg <==
	[Dec13 17:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014739] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.517365] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033368] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.774100] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.795951] kauditd_printk_skb: 39 callbacks suppressed
	[Dec13 18:17] overlayfs: idmapped layers are currently not supported
	[  +0.067652] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec13 18:23] overlayfs: idmapped layers are currently not supported
	[Dec13 18:24] overlayfs: idmapped layers are currently not supported
	[Dec13 18:42] overlayfs: idmapped layers are currently not supported
	[Dec13 18:59] overlayfs: idmapped layers are currently not supported
	[ +33.753607] overlayfs: idmapped layers are currently not supported
	[Dec13 19:01] overlayfs: idmapped layers are currently not supported
	[Dec13 19:02] overlayfs: idmapped layers are currently not supported
	[Dec13 19:03] overlayfs: idmapped layers are currently not supported
	[Dec13 19:05] overlayfs: idmapped layers are currently not supported
	[  +4.041925] overlayfs: idmapped layers are currently not supported
	[ +36.958854] overlayfs: idmapped layers are currently not supported
	[Dec13 19:06] overlayfs: idmapped layers are currently not supported
	[Dec13 19:07] overlayfs: idmapped layers are currently not supported
	[  +4.088622] overlayfs: idmapped layers are currently not supported
	[Dec13 19:16] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [9a81ddd488bb7e9ca9d20cc8af4e9414463f3bf2bd40edd26c2e9395f731a3ec] <==
	{"level":"warn","ts":"2025-12-13T19:16:17.142786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.6:50648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:16:17.194193Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"5009f1552d554ae7","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:16:17.195086Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"5009f1552d554ae7","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:16:17.230885Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.6:50670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:16:17.256346Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.6:50676","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-13T19:16:17.338622Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"5009f1552d554ae7"}
	{"level":"warn","ts":"2025-12-13T19:16:17.429365Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"5009f1552d554ae7","error":"failed to write 5009f1552d554ae7 on stream Message (write tcp 192.168.49.2:2380->192.168.49.6:55896: write: broken pipe)"}
	{"level":"warn","ts":"2025-12-13T19:16:17.429468Z","caller":"rafthttp/stream.go:222","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"5009f1552d554ae7"}
	{"level":"info","ts":"2025-12-13T19:16:17.513505Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"5009f1552d554ae7","stream-type":"stream Message"}
	{"level":"info","ts":"2025-12-13T19:16:17.513549Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"5009f1552d554ae7"}
	{"level":"info","ts":"2025-12-13T19:16:17.513563Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"5009f1552d554ae7"}
	{"level":"warn","ts":"2025-12-13T19:16:17.515698Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"5009f1552d554ae7","error":"failed to write 5009f1552d554ae7 on stream MsgApp v2 (write tcp 192.168.49.2:2380->192.168.49.6:55890: write: broken pipe)"}
	{"level":"warn","ts":"2025-12-13T19:16:17.515782Z","caller":"rafthttp/stream.go:222","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"5009f1552d554ae7"}
	{"level":"info","ts":"2025-12-13T19:16:17.623903Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"5009f1552d554ae7"}
	{"level":"info","ts":"2025-12-13T19:16:17.623955Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"5009f1552d554ae7"}
	{"level":"info","ts":"2025-12-13T19:16:17.724525Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"5009f1552d554ae7","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-12-13T19:16:17.724569Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"5009f1552d554ae7"}
	{"level":"warn","ts":"2025-12-13T19:16:19.304613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.6:53116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:16:21.341113Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.6:53134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:16:23.365561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.6:53150","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-13T19:16:30.482933Z","caller":"etcdserver/server.go:2262","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-12-13T19:16:35.825998Z","caller":"etcdserver/server.go:2262","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-12-13T19:16:46.989497Z","caller":"etcdserver/server.go:1872","msg":"sent merged snapshot","from":"aec36adc501070cc","to":"5009f1552d554ae7","bytes":6821221,"size":"6.8 MB","took":"30.436888515s"}
	{"level":"warn","ts":"2025-12-13T19:17:19.051923Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"117.73027ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/\" range_end:\"/registry/events0\" limit:500 ","response":"range_response_count:500 size:367897"}
	{"level":"info","ts":"2025-12-13T19:17:19.051986Z","caller":"traceutil/trace.go:172","msg":"trace[1872679937] range","detail":"{range_begin:/registry/events/; range_end:/registry/events0; response_count:500; response_revision:3793; }","duration":"117.816407ms","start":"2025-12-13T19:17:18.934159Z","end":"2025-12-13T19:17:19.051975Z","steps":["trace[1872679937] 'range keys from bolt db'  (duration: 116.686715ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:17:25 up  1:59,  0 user,  load average: 0.92, 1.21, 1.34
	Linux ha-605114 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f1a416886d288f33359cd21dacc737dbed6a3c975d9323a89f8c93828c040431] <==
	I1213 19:16:55.125673       1 main.go:324] Node ha-605114-m05 has CIDR [10.244.2.0/24] 
	I1213 19:17:05.130684       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1213 19:17:05.130741       1 main.go:324] Node ha-605114-m02 has CIDR [10.244.1.0/24] 
	I1213 19:17:05.130977       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1213 19:17:05.130996       1 main.go:324] Node ha-605114-m04 has CIDR [10.244.3.0/24] 
	I1213 19:17:05.131090       1 main.go:297] Handling node with IPs: map[192.168.49.6:{}]
	I1213 19:17:05.131105       1 main.go:324] Node ha-605114-m05 has CIDR [10.244.2.0/24] 
	I1213 19:17:05.131319       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 19:17:05.131399       1 main.go:301] handling current node
	I1213 19:17:15.121782       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 19:17:15.121872       1 main.go:301] handling current node
	I1213 19:17:15.121890       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1213 19:17:15.121896       1 main.go:324] Node ha-605114-m02 has CIDR [10.244.1.0/24] 
	I1213 19:17:15.122089       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1213 19:17:15.122114       1 main.go:324] Node ha-605114-m04 has CIDR [10.244.3.0/24] 
	I1213 19:17:15.122220       1 main.go:297] Handling node with IPs: map[192.168.49.6:{}]
	I1213 19:17:15.122232       1 main.go:324] Node ha-605114-m05 has CIDR [10.244.2.0/24] 
	I1213 19:17:25.128301       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 19:17:25.128340       1 main.go:301] handling current node
	I1213 19:17:25.128357       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1213 19:17:25.128363       1 main.go:324] Node ha-605114-m02 has CIDR [10.244.1.0/24] 
	I1213 19:17:25.128536       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1213 19:17:25.128548       1 main.go:324] Node ha-605114-m04 has CIDR [10.244.3.0/24] 
	I1213 19:17:25.128756       1 main.go:297] Handling node with IPs: map[192.168.49.6:{}]
	I1213 19:17:25.128773       1 main.go:324] Node ha-605114-m05 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [3c729bb1538bfb45bc9b5542f5524916c96b118344d2be8a42e58a0bc6d4cb0d] <==
	{"level":"warn","ts":"2025-12-13T19:09:39.225607Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40012ff680/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-13T19:09:39.225637Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40014ec3c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-13T19:09:39.225654Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40029a8780/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-13T19:09:39.225669Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40012fc780/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-13T19:09:39.225684Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40012fd2c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-13T19:09:39.231292Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40012fc1e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-13T19:09:39.231412Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40019832c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-13T19:09:39.231467Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001982000/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-13T19:09:39.231521Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400103ad20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-13T19:09:39.231578Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40019b2000/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-13T19:09:39.231633Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001f0bc20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-13T19:09:39.231700Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40012fed20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-13T19:09:39.231767Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40012fed20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-13T19:09:39.231831Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40028461e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-13T19:09:39.231883Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40028461e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-13T19:09:39.231933Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40028461e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-13T19:09:39.231988Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001bfa5a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-12-13T19:09:39.232044Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001bfa5a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	W1213 19:09:41.980970       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1213 19:09:41.982698       1 controller.go:667] quota admission added evaluator for: endpoints
	I1213 19:09:41.995308       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 19:09:44.281972       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1213 19:09:52.543985       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 19:10:34.144307       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1213 19:10:34.189645       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [167e9e0789f864655d959c63fd731257c88aa1e1b22515ec35f4a07af4678202] <==
	E1213 19:10:23.979852       1 gc_controller.go:151] "Failed to get node" err="node \"ha-605114-m03\" not found" logger="pod-garbage-collector-controller" node="ha-605114-m03"
	E1213 19:10:23.979884       1 gc_controller.go:151] "Failed to get node" err="node \"ha-605114-m03\" not found" logger="pod-garbage-collector-controller" node="ha-605114-m03"
	E1213 19:10:23.979949       1 gc_controller.go:151] "Failed to get node" err="node \"ha-605114-m03\" not found" logger="pod-garbage-collector-controller" node="ha-605114-m03"
	E1213 19:10:23.979979       1 gc_controller.go:151] "Failed to get node" err="node \"ha-605114-m03\" not found" logger="pod-garbage-collector-controller" node="ha-605114-m03"
	I1213 19:10:24.001195       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-605114-m03"
	I1213 19:10:24.044627       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-605114-m03"
	I1213 19:10:24.044809       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-605114-m03"
	I1213 19:10:24.081792       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-605114-m03"
	I1213 19:10:24.081903       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-605114-m03"
	I1213 19:10:24.149160       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-605114-m03"
	I1213 19:10:24.149272       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-605114-m03"
	I1213 19:10:24.187394       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-605114-m03"
	I1213 19:10:24.187500       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-4kfpv"
	I1213 19:10:24.241495       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-4kfpv"
	I1213 19:10:24.241622       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-5m48f"
	I1213 19:10:24.284484       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-5m48f"
	I1213 19:10:24.284851       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-605114-m03"
	I1213 19:10:24.328812       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-605114-m03"
	I1213 19:15:34.087612       1 taint_eviction.go:111] "Deleting pod" logger="taint-eviction-controller" controller="taint-eviction-controller" pod="default/busybox-7b57f96db7-wl925"
	I1213 19:15:44.076408       1 taint_eviction.go:111] "Deleting pod" logger="taint-eviction-controller" controller="taint-eviction-controller" pod="default/busybox-7b57f96db7-gqp98"
	I1213 19:16:30.485685       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-605114-m05\" does not exist"
	I1213 19:16:30.546704       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-605114-m05" podCIDRs=["10.244.2.0/24"]
	I1213 19:16:34.286406       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-605114-m05"
	I1213 19:16:34.286778       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="PartialDisruption"
	I1213 19:17:19.294604       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	
	
	==> kube-controller-manager [7bc9cb09a081ed47d17ecf35e2d91134eaacd5250ce00bcdebed3d1097640773] <==
	I1213 19:08:49.567762       1 serving.go:386] Generated self-signed cert in-memory
	I1213 19:08:50.364508       1 controllermanager.go:191] "Starting" version="v1.34.2"
	I1213 19:08:50.364608       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 19:08:50.366449       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1213 19:08:50.366623       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1213 19:08:50.366938       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1213 19:08:50.366991       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1213 19:09:04.386470       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[-]etcd failed: reason withheld\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-proxy [adb6a0d2cd30435f1f392f09033a5ad40b3f1d3a5a2f1fe0d2ae76a50bf8f3b4] <==
	I1213 19:08:50.244883       1 reflector.go:568] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding"
	E1213 19:08:50.246471       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": http2: client connection lost"
	E1213 19:08:54.165411       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-605114&resourceVersion=2607\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 19:08:54.165542       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2599\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1213 19:08:54.165634       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2598\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1213 19:08:54.165741       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2598\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1213 19:08:57.237395       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-605114&resourceVersion=2607\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 19:08:57.237414       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2598\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1213 19:08:57.237660       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2599\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1213 19:08:57.237667       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2598\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1213 19:09:03.989710       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2599\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1213 19:09:03.989962       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2598\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1213 19:09:03.990083       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.49.254:8443: connect: no route to host"
	E1213 19:09:03.990245       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2598\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1213 19:09:03.990394       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-605114&resourceVersion=2607\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 19:09:15.029488       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-605114&resourceVersion=2607\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 19:09:15.029488       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2598\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1213 19:09:15.029671       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2599\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1213 19:09:15.029765       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2598\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1213 19:09:18.101424       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.49.254:8443: connect: no route to host"
	E1213 19:09:31.797443       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2599\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1213 19:09:31.797538       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.49.254:8443: connect: no route to host"
	E1213 19:09:31.797646       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-605114&resourceVersion=2607\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 19:09:34.869405       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2598\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1213 19:09:42.229400       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2598\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	
	
	==> kube-scheduler [ee202abc8dba3b97ac56d7c3063ce4fae0734134ba47b9d6070588c897f7baf0] <==
	E1213 19:08:02.527700       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1213 19:08:02.527776       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1213 19:08:02.527848       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1213 19:08:02.527900       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1213 19:08:02.527911       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1213 19:08:02.527950       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1213 19:08:02.528002       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1213 19:08:02.528106       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1213 19:08:02.528181       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1213 19:08:02.528340       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1213 19:08:02.528402       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1213 19:08:03.355200       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1213 19:08:03.375752       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1213 19:08:03.384341       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1213 19:08:03.496281       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1213 19:08:03.527514       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 19:08:03.564170       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1213 19:08:03.604860       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1213 19:08:03.609546       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1213 19:08:03.663151       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1213 19:08:03.683755       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1213 19:08:03.838837       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1213 19:08:03.901316       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1213 19:08:03.901563       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1213 19:08:06.412915       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 13 19:09:04 ha-605114 kubelet[806]: I1213 19:09:04.239034     806 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
	Dec 13 19:09:04 ha-605114 kubelet[806]: E1213 19:09:04.524602     806 status_manager.go:1018] "Failed to get status for pod" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods coredns-66bc5c9577-rc9qg)" podUID="0f2b52ea-d2f2-4307-8a52-619a737c2611" pod="kube-system/coredns-66bc5c9577-rc9qg"
	Dec 13 19:09:04 ha-605114 kubelet[806]: I1213 19:09:04.666266     806 scope.go:117] "RemoveContainer" containerID="38e10b9deae562bcc475d6b257111633953b93aa5e59b05a1a5aaca29705804b"
	Dec 13 19:09:04 ha-605114 kubelet[806]: I1213 19:09:04.666833     806 scope.go:117] "RemoveContainer" containerID="7bc9cb09a081ed47d17ecf35e2d91134eaacd5250ce00bcdebed3d1097640773"
	Dec 13 19:09:04 ha-605114 kubelet[806]: E1213 19:09:04.667006     806 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-605114_kube-system(6b36430ebbfe01869fc54848b2e1c2a9)\"" pod="kube-system/kube-controller-manager-ha-605114" podUID="6b36430ebbfe01869fc54848b2e1c2a9"
	Dec 13 19:09:05 ha-605114 kubelet[806]: E1213 19:09:05.059732     806 kubelet_node_status.go:486] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T19:08:55Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T19:08:55Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T19:08:55Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T19:08:55Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"ha-605114\": Patch \"https://192.168.49.2:8443/api/v1/nodes/ha-605114/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Dec 13 19:09:06 ha-605114 kubelet[806]: I1213 19:09:06.894025     806 scope.go:117] "RemoveContainer" containerID="7bc9cb09a081ed47d17ecf35e2d91134eaacd5250ce00bcdebed3d1097640773"
	Dec 13 19:09:06 ha-605114 kubelet[806]: E1213 19:09:06.894244     806 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-605114_kube-system(6b36430ebbfe01869fc54848b2e1c2a9)\"" pod="kube-system/kube-controller-manager-ha-605114" podUID="6b36430ebbfe01869fc54848b2e1c2a9"
	Dec 13 19:09:12 ha-605114 kubelet[806]: E1213 19:09:12.933737     806 projected.go:196] Error preparing data for projected volume kube-api-access-sctl2 for pod kube-system/storage-provisioner: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
	Dec 13 19:09:12 ha-605114 kubelet[806]: E1213 19:09:12.933838     806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2bdd28fc-c3f6-401d-9328-27dc669e196a-kube-api-access-sctl2 podName:2bdd28fc-c3f6-401d-9328-27dc669e196a nodeName:}" failed. No retries permitted until 2025-12-13 19:09:13.933816541 +0000 UTC m=+79.712758196 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-sctl2" (UniqueName: "kubernetes.io/projected/2bdd28fc-c3f6-401d-9328-27dc669e196a-kube-api-access-sctl2") pod "storage-provisioner" (UID: "2bdd28fc-c3f6-401d-9328-27dc669e196a") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
	Dec 13 19:09:12 ha-605114 kubelet[806]: E1213 19:09:12.934020     806 projected.go:196] Error preparing data for projected volume kube-api-access-4p9km for pod kube-system/coredns-66bc5c9577-85rpk: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
	Dec 13 19:09:12 ha-605114 kubelet[806]: E1213 19:09:12.934081     806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d7650f5f-c93c-4824-98ba-c6242f1d9595-kube-api-access-4p9km podName:d7650f5f-c93c-4824-98ba-c6242f1d9595 nodeName:}" failed. No retries permitted until 2025-12-13 19:09:13.934068028 +0000 UTC m=+79.713009674 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-4p9km" (UniqueName: "kubernetes.io/projected/d7650f5f-c93c-4824-98ba-c6242f1d9595-kube-api-access-4p9km") pod "coredns-66bc5c9577-85rpk" (UID: "d7650f5f-c93c-4824-98ba-c6242f1d9595") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
	Dec 13 19:09:12 ha-605114 kubelet[806]: E1213 19:09:12.934128     806 projected.go:196] Error preparing data for projected volume kube-api-access-rtb9w for pod default/busybox-7b57f96db7-h5qqv: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
	Dec 13 19:09:12 ha-605114 kubelet[806]: E1213 19:09:12.934157     806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b89d6cc7-836d-44be-997e-9a7fe221a5d8-kube-api-access-rtb9w podName:b89d6cc7-836d-44be-997e-9a7fe221a5d8 nodeName:}" failed. No retries permitted until 2025-12-13 19:09:13.934149422 +0000 UTC m=+79.713091069 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-rtb9w" (UniqueName: "kubernetes.io/projected/b89d6cc7-836d-44be-997e-9a7fe221a5d8-kube-api-access-rtb9w") pod "busybox-7b57f96db7-h5qqv" (UID: "b89d6cc7-836d-44be-997e-9a7fe221a5d8") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
	Dec 13 19:09:14 ha-605114 kubelet[806]: E1213 19:09:14.239262     806 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-605114?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="200ms"
	Dec 13 19:09:15 ha-605114 kubelet[806]: E1213 19:09:15.060662     806 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ha-605114\": Get \"https://192.168.49.2:8443/api/v1/nodes/ha-605114?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Dec 13 19:09:17 ha-605114 kubelet[806]: I1213 19:09:17.413956     806 scope.go:117] "RemoveContainer" containerID="7bc9cb09a081ed47d17ecf35e2d91134eaacd5250ce00bcdebed3d1097640773"
	Dec 13 19:09:17 ha-605114 kubelet[806]: E1213 19:09:17.414150     806 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-605114_kube-system(6b36430ebbfe01869fc54848b2e1c2a9)\"" pod="kube-system/kube-controller-manager-ha-605114" podUID="6b36430ebbfe01869fc54848b2e1c2a9"
	Dec 13 19:09:19 ha-605114 kubelet[806]: E1213 19:09:19.556378     806 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{ha-605114.1880dbef376d6535  default   2620 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-605114,UID:ha-605114,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ha-605114 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ha-605114,},FirstTimestamp:2025-12-13 19:07:54 +0000 UTC,LastTimestamp:2025-12-13 19:07:54.517705313 +0000 UTC m=+0.296646960,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-605114,}"
	Dec 13 19:09:24 ha-605114 kubelet[806]: E1213 19:09:24.441298     806 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-605114?timeout=10s\": context deadline exceeded" interval="400ms"
	Dec 13 19:09:25 ha-605114 kubelet[806]: E1213 19:09:25.061462     806 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ha-605114\": Get \"https://192.168.49.2:8443/api/v1/nodes/ha-605114?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Dec 13 19:09:31 ha-605114 kubelet[806]: I1213 19:09:31.414094     806 scope.go:117] "RemoveContainer" containerID="7bc9cb09a081ed47d17ecf35e2d91134eaacd5250ce00bcdebed3d1097640773"
	Dec 13 19:09:34 ha-605114 kubelet[806]: E1213 19:09:34.844103     806 controller.go:145] "Failed to ensure lease exists, will retry" err="the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io ha-605114)" interval="800ms"
	Dec 13 19:09:35 ha-605114 kubelet[806]: E1213 19:09:35.061741     806 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ha-605114\": Get \"https://192.168.49.2:8443/api/v1/nodes/ha-605114?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Dec 13 19:09:39 ha-605114 kubelet[806]: W1213 19:09:39.981430     806 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/b8b77eca4604af1b6af60bca70543cf68142cce11359af7814880328fa73eb01/crio-1d4641fc3fdaccf9146fa15e852f55d85346be6c485420108067be6aabe0b5f4 WatchSource:0}: Error finding container 1d4641fc3fdaccf9146fa15e852f55d85346be6c485420108067be6aabe0b5f4: Status 404 returned error can't find the container with id 1d4641fc3fdaccf9146fa15e852f55d85346be6c485420108067be6aabe0b5f4
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-605114 -n ha-605114
helpers_test.go:270: (dbg) Run:  kubectl --context ha-605114 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: busybox-7b57f96db7-jxpf7
helpers_test.go:283: ======> post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context ha-605114 describe pod busybox-7b57f96db7-jxpf7
helpers_test.go:291: (dbg) kubectl --context ha-605114 describe pod busybox-7b57f96db7-jxpf7:

                                                
                                                
-- stdout --
	Name:             busybox-7b57f96db7-jxpf7
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-696pr (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-696pr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                 From               Message
	  ----     ------            ----                ----               -------
	  Warning  FailedScheduling  58s (x5 over 103s)  default-scheduler  0/3 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/3 nodes are available: 1 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  58s (x5 over 62s)   default-scheduler  0/3 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/3 nodes are available: 1 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  56s (x3 over 57s)   default-scheduler  0/4 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 3 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/4 nodes are available: 1 No preemption victims found for incoming pod, 3 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  56s (x3 over 57s)   default-scheduler  0/4 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 3 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/4 nodes are available: 1 No preemption victims found for incoming pod, 3 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  12s                 default-scheduler  0/4 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  12s                 default-scheduler  0/4 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.

                                                
                                                
-- /stdout --
helpers_test.go:294: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (5.94s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (1.76s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-981625 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-981625 --output=json --user=testUser: exit status 80 (1.761378261s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"68bb64da-98f5-4d27-847e-ba83271ea32c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-981625 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"088125a4-a340-472f-a15b-d6c7206de69f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-13T19:19:06Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"1034bcd1-1888-4b84-a51c-a85b8ba9de97","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following file to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-981625 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (1.76s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (1.71s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-981625 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-981625 --output=json --user=testUser: exit status 80 (1.712719023s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"73b31c9e-ad77-4bba-8873-0f4bece656df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-981625 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"75b7bfd5-daee-45c8-ae92-df937ae490b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-13T19:19:07Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"b93cdafc-e7b6-4f47-b844-80e57c1af9eb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following file to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-981625 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.71s)

                                                
                                    
x
+
TestKubernetesUpgrade (796.37s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-203932 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1213 19:36:42.459678    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-350101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-203932 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (43.249143417s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-203932
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-203932: (2.026800675s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-203932 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-203932 status --format={{.Host}}: exit status 7 (88.224424ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
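The sequence this test drives (start on v1.28.0, stop, then start again on v1.35.0-beta.0) can be replayed by hand with the same flags when bisecting failures like the exit status 109 below; this is only a sketch built from the commands already shown in the log, using a throwaway profile name:

	minikube start  -p kubernetes-upgrade-repro --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio
	minikube stop   -p kubernetes-upgrade-repro
	minikube start  -p kubernetes-upgrade-repro --memory=3072 --kubernetes-version=v1.35.0-beta.0 --driver=docker --container-runtime=crio
	minikube delete -p kubernetes-upgrade-repro   # clean up the throwaway profile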
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-203932 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-203932 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 109 (12m26.423427563s)
-- stdout --
	* [kubernetes-upgrade-203932] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22122
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22122-2686/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-2686/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "kubernetes-upgrade-203932" primary control-plane node in "kubernetes-upgrade-203932" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	
	
-- /stdout --
** stderr ** 
	I1213 19:37:17.243991  195912 out.go:360] Setting OutFile to fd 1 ...
	I1213 19:37:17.244259  195912 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 19:37:17.244294  195912 out.go:374] Setting ErrFile to fd 2...
	I1213 19:37:17.244314  195912 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 19:37:17.244732  195912 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-2686/.minikube/bin
	I1213 19:37:17.245350  195912 out.go:368] Setting JSON to false
	I1213 19:37:17.246373  195912 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":8390,"bootTime":1765646248,"procs":165,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 19:37:17.246478  195912 start.go:143] virtualization:  
	I1213 19:37:17.250601  195912 out.go:179] * [kubernetes-upgrade-203932] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 19:37:17.254628  195912 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 19:37:17.255642  195912 notify.go:221] Checking for updates...
	I1213 19:37:17.261140  195912 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 19:37:17.264127  195912 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-2686/kubeconfig
	I1213 19:37:17.266991  195912 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-2686/.minikube
	I1213 19:37:17.270001  195912 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 19:37:17.272996  195912 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 19:37:17.275969  195912 config.go:182] Loaded profile config "kubernetes-upgrade-203932": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1213 19:37:17.276535  195912 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 19:37:17.309108  195912 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 19:37:17.309243  195912 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 19:37:17.373705  195912 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-13 19:37:17.359624286 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 19:37:17.373813  195912 docker.go:319] overlay module found
	I1213 19:37:17.377191  195912 out.go:179] * Using the docker driver based on existing profile
	I1213 19:37:17.380079  195912 start.go:309] selected driver: docker
	I1213 19:37:17.380094  195912 start.go:927] validating driver "docker" against &{Name:kubernetes-upgrade-203932 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-203932 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 19:37:17.380221  195912 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 19:37:17.380909  195912 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 19:37:17.439366  195912 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-13 19:37:17.429150102 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 19:37:17.439715  195912 cni.go:84] Creating CNI manager for ""
	I1213 19:37:17.439776  195912 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 19:37:17.439815  195912 start.go:353] cluster config:
	{Name:kubernetes-upgrade-203932 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-203932 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain
:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAut
hSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 19:37:17.443030  195912 out.go:179] * Starting "kubernetes-upgrade-203932" primary control-plane node in "kubernetes-upgrade-203932" cluster
	I1213 19:37:17.445791  195912 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 19:37:17.448740  195912 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 19:37:17.451747  195912 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 19:37:17.451793  195912 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-2686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1213 19:37:17.451807  195912 cache.go:65] Caching tarball of preloaded images
	I1213 19:37:17.451828  195912 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 19:37:17.451906  195912 preload.go:238] Found /home/jenkins/minikube-integration/22122-2686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 19:37:17.451917  195912 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1213 19:37:17.452022  195912 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/kubernetes-upgrade-203932/config.json ...
	I1213 19:37:17.471254  195912 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 19:37:17.471277  195912 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 19:37:17.471294  195912 cache.go:243] Successfully downloaded all kic artifacts
	I1213 19:37:17.471321  195912 start.go:360] acquireMachinesLock for kubernetes-upgrade-203932: {Name:mk7d9409dc8ddcdd0362ea6c3d8be9caf1d61b3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 19:37:17.471382  195912 start.go:364] duration metric: took 38.72µs to acquireMachinesLock for "kubernetes-upgrade-203932"
	I1213 19:37:17.471408  195912 start.go:96] Skipping create...Using existing machine configuration
	I1213 19:37:17.471418  195912 fix.go:54] fixHost starting: 
	I1213 19:37:17.471674  195912 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-203932 --format={{.State.Status}}
	I1213 19:37:17.488988  195912 fix.go:112] recreateIfNeeded on kubernetes-upgrade-203932: state=Stopped err=<nil>
	W1213 19:37:17.489055  195912 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 19:37:17.492222  195912 out.go:252] * Restarting existing docker container for "kubernetes-upgrade-203932" ...
	I1213 19:37:17.492305  195912 cli_runner.go:164] Run: docker start kubernetes-upgrade-203932
	I1213 19:37:17.821439  195912 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-203932 --format={{.State.Status}}
	I1213 19:37:17.857465  195912 kic.go:430] container "kubernetes-upgrade-203932" state is running.
	I1213 19:37:17.859738  195912 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-203932
	I1213 19:37:17.890022  195912 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/kubernetes-upgrade-203932/config.json ...
	I1213 19:37:17.890236  195912 machine.go:94] provisionDockerMachine start ...
	I1213 19:37:17.890301  195912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-203932
	I1213 19:37:17.918281  195912 main.go:143] libmachine: Using SSH client type: native
	I1213 19:37:17.919025  195912 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33003 <nil> <nil>}
	I1213 19:37:17.919043  195912 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 19:37:17.919774  195912 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1213 19:37:21.077020  195912 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-203932
	
	I1213 19:37:21.077046  195912 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-203932"
	I1213 19:37:21.077145  195912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-203932
	I1213 19:37:21.098518  195912 main.go:143] libmachine: Using SSH client type: native
	I1213 19:37:21.098826  195912 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33003 <nil> <nil>}
	I1213 19:37:21.098837  195912 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-203932 && echo "kubernetes-upgrade-203932" | sudo tee /etc/hostname
	I1213 19:37:21.271493  195912 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-203932
	
	I1213 19:37:21.271661  195912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-203932
	I1213 19:37:21.302645  195912 main.go:143] libmachine: Using SSH client type: native
	I1213 19:37:21.302957  195912 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33003 <nil> <nil>}
	I1213 19:37:21.302981  195912 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-203932' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-203932/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-203932' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 19:37:21.469807  195912 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 19:37:21.469866  195912 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-2686/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-2686/.minikube}
	I1213 19:37:21.469918  195912 ubuntu.go:190] setting up certificates
	I1213 19:37:21.469935  195912 provision.go:84] configureAuth start
	I1213 19:37:21.470051  195912 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-203932
	I1213 19:37:21.501458  195912 provision.go:143] copyHostCerts
	I1213 19:37:21.501532  195912 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem, removing ...
	I1213 19:37:21.501541  195912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem
	I1213 19:37:21.501613  195912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem (1123 bytes)
	I1213 19:37:21.501712  195912 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem, removing ...
	I1213 19:37:21.501717  195912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem
	I1213 19:37:21.501742  195912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem (1675 bytes)
	I1213 19:37:21.501803  195912 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem, removing ...
	I1213 19:37:21.501807  195912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem
	I1213 19:37:21.501831  195912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem (1082 bytes)
	I1213 19:37:21.501884  195912 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-203932 san=[127.0.0.1 192.168.76.2 kubernetes-upgrade-203932 localhost minikube]
	I1213 19:37:21.649517  195912 provision.go:177] copyRemoteCerts
	I1213 19:37:21.649628  195912 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 19:37:21.649725  195912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-203932
	I1213 19:37:21.677667  195912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33003 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/kubernetes-upgrade-203932/id_rsa Username:docker}
	I1213 19:37:21.785466  195912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1213 19:37:21.812397  195912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 19:37:21.833982  195912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 19:37:21.858826  195912 provision.go:87] duration metric: took 388.869818ms to configureAuth
	I1213 19:37:21.858857  195912 ubuntu.go:206] setting minikube options for container-runtime
	I1213 19:37:21.859053  195912 config.go:182] Loaded profile config "kubernetes-upgrade-203932": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 19:37:21.859165  195912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-203932
	I1213 19:37:21.876948  195912 main.go:143] libmachine: Using SSH client type: native
	I1213 19:37:21.877392  195912 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33003 <nil> <nil>}
	I1213 19:37:21.877429  195912 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 19:37:22.293279  195912 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 19:37:22.293364  195912 machine.go:97] duration metric: took 4.403118307s to provisionDockerMachine
	I1213 19:37:22.293390  195912 start.go:293] postStartSetup for "kubernetes-upgrade-203932" (driver="docker")
	I1213 19:37:22.293435  195912 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 19:37:22.293541  195912 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 19:37:22.293614  195912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-203932
	I1213 19:37:22.313355  195912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33003 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/kubernetes-upgrade-203932/id_rsa Username:docker}
	I1213 19:37:22.420781  195912 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 19:37:22.424252  195912 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 19:37:22.424277  195912 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 19:37:22.424287  195912 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-2686/.minikube/addons for local assets ...
	I1213 19:37:22.424340  195912 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-2686/.minikube/files for local assets ...
	I1213 19:37:22.424419  195912 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem -> 46372.pem in /etc/ssl/certs
	I1213 19:37:22.424515  195912 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 19:37:22.431694  195912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem --> /etc/ssl/certs/46372.pem (1708 bytes)
	I1213 19:37:22.449798  195912 start.go:296] duration metric: took 156.362582ms for postStartSetup
	I1213 19:37:22.449913  195912 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 19:37:22.449994  195912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-203932
	I1213 19:37:22.467873  195912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33003 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/kubernetes-upgrade-203932/id_rsa Username:docker}
	I1213 19:37:22.575159  195912 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 19:37:22.580546  195912 fix.go:56] duration metric: took 5.109122482s for fixHost
	I1213 19:37:22.580570  195912 start.go:83] releasing machines lock for "kubernetes-upgrade-203932", held for 5.109174527s
	I1213 19:37:22.580648  195912 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-203932
	I1213 19:37:22.602728  195912 ssh_runner.go:195] Run: cat /version.json
	I1213 19:37:22.602788  195912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-203932
	I1213 19:37:22.603103  195912 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 19:37:22.603158  195912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-203932
	I1213 19:37:22.634470  195912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33003 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/kubernetes-upgrade-203932/id_rsa Username:docker}
	I1213 19:37:22.645935  195912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33003 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/kubernetes-upgrade-203932/id_rsa Username:docker}
	I1213 19:37:22.777720  195912 ssh_runner.go:195] Run: systemctl --version
	I1213 19:37:22.896343  195912 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 19:37:22.943138  195912 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 19:37:22.948637  195912 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 19:37:22.948755  195912 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 19:37:22.959371  195912 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 19:37:22.959433  195912 start.go:496] detecting cgroup driver to use...
	I1213 19:37:22.959494  195912 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 19:37:22.959575  195912 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 19:37:22.981489  195912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 19:37:22.995503  195912 docker.go:218] disabling cri-docker service (if available) ...
	I1213 19:37:22.995607  195912 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 19:37:23.012744  195912 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 19:37:23.026539  195912 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 19:37:23.175779  195912 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 19:37:23.315917  195912 docker.go:234] disabling docker service ...
	I1213 19:37:23.316063  195912 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 19:37:23.336426  195912 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 19:37:23.352548  195912 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 19:37:23.540449  195912 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 19:37:23.690112  195912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 19:37:23.704674  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 19:37:23.718730  195912 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 19:37:23.718846  195912 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:37:23.727951  195912 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 19:37:23.728074  195912 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:37:23.742528  195912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:37:23.754011  195912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:37:23.763899  195912 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 19:37:23.774510  195912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:37:23.786296  195912 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:37:23.796009  195912 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:37:23.805957  195912 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 19:37:23.815001  195912 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 19:37:23.823527  195912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 19:37:23.977875  195912 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 19:37:26.125629  195912 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.147676247s)
	I1213 19:37:26.125657  195912 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 19:37:26.125722  195912 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 19:37:26.136814  195912 start.go:564] Will wait 60s for crictl version
	I1213 19:37:26.136891  195912 ssh_runner.go:195] Run: which crictl
	I1213 19:37:26.141498  195912 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 19:37:26.177865  195912 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 19:37:26.177948  195912 ssh_runner.go:195] Run: crio --version
	I1213 19:37:26.217517  195912 ssh_runner.go:195] Run: crio --version
	I1213 19:37:26.267654  195912 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1213 19:37:26.271423  195912 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-203932 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 19:37:26.289950  195912 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1213 19:37:26.293811  195912 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 19:37:26.318650  195912 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-203932 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-203932 Namespace:default APIServerHAVIP: APISe
rverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwar
ePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 19:37:26.318783  195912 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 19:37:26.318846  195912 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 19:37:26.356187  195912 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1213 19:37:26.356255  195912 ssh_runner.go:195] Run: which lz4
	I1213 19:37:26.360360  195912 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1213 19:37:26.365124  195912 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1213 19:37:26.365173  195912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 --> /preloaded.tar.lz4 (306100841 bytes)
	I1213 19:37:28.778240  195912 crio.go:462] duration metric: took 2.417916887s to copy over tarball
	I1213 19:37:28.778342  195912 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1213 19:37:30.977774  195912 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.199386779s)
	I1213 19:37:30.977807  195912 crio.go:469] duration metric: took 2.199537845s to extract the tarball
	I1213 19:37:30.977816  195912 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1213 19:37:31.042333  195912 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 19:37:31.108525  195912 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 19:37:31.108555  195912 cache_images.go:86] Images are preloaded, skipping loading
	I1213 19:37:31.108564  195912 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 crio true true} ...
	I1213 19:37:31.108677  195912 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kubernetes-upgrade-203932 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-203932 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 19:37:31.108765  195912 ssh_runner.go:195] Run: crio config
	I1213 19:37:31.233468  195912 cni.go:84] Creating CNI manager for ""
	I1213 19:37:31.233542  195912 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 19:37:31.233558  195912 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 19:37:31.233595  195912 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-203932 NodeName:kubernetes-upgrade-203932 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca
.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 19:37:31.233917  195912 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-203932"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 19:37:31.234004  195912 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 19:37:31.255270  195912 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 19:37:31.255366  195912 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 19:37:31.266554  195912 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I1213 19:37:31.296814  195912 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 19:37:31.310743  195912 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
	I1213 19:37:31.328735  195912 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1213 19:37:31.333357  195912 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 19:37:31.351690  195912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 19:37:31.556039  195912 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 19:37:31.574698  195912 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/kubernetes-upgrade-203932 for IP: 192.168.76.2
	I1213 19:37:31.574723  195912 certs.go:195] generating shared ca certs ...
	I1213 19:37:31.574739  195912 certs.go:227] acquiring lock for ca certs: {Name:mkf9b87b9a1a82bdfcae8a54365751154f6d858f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:37:31.574916  195912 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key
	I1213 19:37:31.574971  195912 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key
	I1213 19:37:31.574990  195912 certs.go:257] generating profile certs ...
	I1213 19:37:31.575113  195912 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/kubernetes-upgrade-203932/client.key
	I1213 19:37:31.575193  195912 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/kubernetes-upgrade-203932/apiserver.key.053b0923
	I1213 19:37:31.575267  195912 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/kubernetes-upgrade-203932/proxy-client.key
	I1213 19:37:31.576177  195912 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637.pem (1338 bytes)
	W1213 19:37:31.576376  195912 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637_empty.pem, impossibly tiny 0 bytes
	I1213 19:37:31.576405  195912 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 19:37:31.576435  195912 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem (1082 bytes)
	I1213 19:37:31.576475  195912 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem (1123 bytes)
	I1213 19:37:31.576504  195912 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem (1675 bytes)
	I1213 19:37:31.576570  195912 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem (1708 bytes)
	I1213 19:37:31.587038  195912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 19:37:31.664488  195912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 19:37:31.696229  195912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 19:37:31.716616  195912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 19:37:31.738793  195912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/kubernetes-upgrade-203932/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1213 19:37:31.759790  195912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/kubernetes-upgrade-203932/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 19:37:31.780552  195912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/kubernetes-upgrade-203932/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 19:37:31.802547  195912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/kubernetes-upgrade-203932/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 19:37:31.827739  195912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637.pem --> /usr/share/ca-certificates/4637.pem (1338 bytes)
	I1213 19:37:31.848445  195912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem --> /usr/share/ca-certificates/46372.pem (1708 bytes)
	I1213 19:37:31.885330  195912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 19:37:31.910258  195912 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 19:37:31.932595  195912 ssh_runner.go:195] Run: openssl version
	I1213 19:37:31.941305  195912 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4637.pem
	I1213 19:37:31.950282  195912 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4637.pem /etc/ssl/certs/4637.pem
	I1213 19:37:31.959007  195912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4637.pem
	I1213 19:37:31.963183  195912 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 18:27 /usr/share/ca-certificates/4637.pem
	I1213 19:37:31.963250  195912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4637.pem
	I1213 19:37:32.008013  195912 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 19:37:32.016310  195912 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/46372.pem
	I1213 19:37:32.024268  195912 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/46372.pem /etc/ssl/certs/46372.pem
	I1213 19:37:32.032369  195912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/46372.pem
	I1213 19:37:32.036931  195912 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 18:27 /usr/share/ca-certificates/46372.pem
	I1213 19:37:32.037099  195912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/46372.pem
	I1213 19:37:32.079400  195912 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 19:37:32.087178  195912 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:37:32.095362  195912 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 19:37:32.105314  195912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:37:32.112060  195912 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:37:32.112201  195912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:37:32.160590  195912 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 19:37:32.171271  195912 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 19:37:32.178191  195912 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 19:37:32.233048  195912 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 19:37:32.281058  195912 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 19:37:32.329195  195912 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 19:37:32.378954  195912 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 19:37:32.430153  195912 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 19:37:32.481540  195912 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-203932 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-203932 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePa
th: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 19:37:32.481626  195912 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 19:37:32.481700  195912 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 19:37:32.514401  195912 cri.go:89] found id: ""
	I1213 19:37:32.514479  195912 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 19:37:32.524081  195912 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 19:37:32.524102  195912 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 19:37:32.524166  195912 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 19:37:32.534584  195912 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 19:37:32.535006  195912 kubeconfig.go:47] verify endpoint returned: get endpoint: "kubernetes-upgrade-203932" does not appear in /home/jenkins/minikube-integration/22122-2686/kubeconfig
	I1213 19:37:32.535102  195912 kubeconfig.go:62] /home/jenkins/minikube-integration/22122-2686/kubeconfig needs updating (will repair): [kubeconfig missing "kubernetes-upgrade-203932" cluster setting kubeconfig missing "kubernetes-upgrade-203932" context setting]
	I1213 19:37:32.535373  195912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/kubeconfig: {Name:mkd364151ba0e08b56cf2ae826abbcc274faaabf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:37:32.535879  195912 kapi.go:59] client config for kubernetes-upgrade-203932: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/profiles/kubernetes-upgrade-203932/client.crt", KeyFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/profiles/kubernetes-upgrade-203932/client.key", CAFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 19:37:32.536382  195912 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1213 19:37:32.536404  195912 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1213 19:37:32.536409  195912 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1213 19:37:32.536414  195912 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1213 19:37:32.536418  195912 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1213 19:37:32.537536  195912 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 19:37:32.551860  195912 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-13 19:36:47.723504510 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-13 19:37:31.323713880 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.76.2
	@@ -14,31 +14,34 @@
	   criSocket: unix:///var/run/crio/crio.sock
	   name: "kubernetes-upgrade-203932"
	   kubeletExtraArgs:
	-    node-ip: 192.168.76.2
	+    - name: "node-ip"
	+      value: "192.168.76.2"
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    - name: "enable-admission-plugins"
	+      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	   extraArgs:
	-    allocate-node-cidrs: "true"
	-    leader-elect: "false"
	+    - name: "allocate-node-cidrs"
	+      value: "true"
	+    - name: "leader-elect"
	+      value: "false"
	 scheduler:
	   extraArgs:
	-    leader-elect: "false"
	+    - name: "leader-elect"
	+      value: "false"
	 certificatesDir: /var/lib/minikube/certs
	 clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.28.0
	+kubernetesVersion: v1.35.0-beta.0
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
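Editor's note: the drift shown in the diff above is the kubeadm v1beta3 to v1beta4 config migration: extraArgs moves from a string map to a list of name/value objects, the etcd proxy-refresh-interval extraArgs block is dropped, and kubernetesVersion is bumped to v1.35.0-beta.0. As a reading aid only, this is the migrated ClusterConfiguration fragment implied by the "+" side of the diff (values copied from the diff, nothing added):

apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
  extraArgs:
    - name: "enable-admission-plugins"
      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    - name: "allocate-node-cidrs"
      value: "true"
    - name: "leader-elect"
      value: "false"
scheduler:
  extraArgs:
    - name: "leader-elect"
      value: "false"
kubernetesVersion: v1.35.0-beta.0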
	I1213 19:37:32.551882  195912 kubeadm.go:1161] stopping kube-system containers ...
	I1213 19:37:32.551902  195912 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1213 19:37:32.551968  195912 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 19:37:32.598381  195912 cri.go:89] found id: ""
	I1213 19:37:32.598451  195912 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 19:37:32.618546  195912 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 19:37:32.627374  195912 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5643 Dec 13 19:36 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Dec 13 19:36 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Dec 13 19:37 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Dec 13 19:36 /etc/kubernetes/scheduler.conf
	
	I1213 19:37:32.627449  195912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 19:37:32.637429  195912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 19:37:32.647456  195912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 19:37:32.661166  195912 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 19:37:32.661235  195912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 19:37:32.669912  195912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 19:37:32.678762  195912 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 19:37:32.678830  195912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 19:37:32.686186  195912 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 19:37:32.693920  195912 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 19:37:32.754485  195912 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 19:37:34.152714  195912 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.398198539s)
	I1213 19:37:34.152782  195912 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 19:37:34.401052  195912 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 19:37:34.498895  195912 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
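Editor's note: the cp and the five "kubeadm init phase" Run lines above are the node-side restart sequence minikube drives over SSH, one ssh_runner call at a time. A condensed, equivalent sketch of that sequence, using the same commands and paths as the log (the KUBEADM_CFG and BIN variable names are introduced here purely for brevity):

#!/usr/bin/env bash
# Re-run the kubeadm init phases against the freshly written config,
# mirroring the ssh_runner calls logged above.
set -euo pipefail

KUBEADM_CFG=/var/tmp/minikube/kubeadm.yaml
BIN=/var/lib/minikube/binaries/v1.35.0-beta.0

sudo cp "${KUBEADM_CFG}.new" "${KUBEADM_CFG}"
for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
  # Prepend the versioned kubeadm binary directory to PATH, as minikube does.
  sudo /bin/bash -c "env PATH=\"${BIN}:\$PATH\" kubeadm init phase ${phase} --config ${KUBEADM_CFG}"
done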
	I1213 19:37:34.553971  195912 api_server.go:52] waiting for apiserver process to appear ...
	I1213 19:37:34.554052  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:37:35.054478  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:37:35.554235  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:37:36.054794  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:37:36.554572  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:37:37.054491  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:37:37.554157  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:37:38.054990  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:37:38.554945  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:37:39.054115  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:37:39.554963  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:37:40.054188  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:37:40.554793  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:37:41.054192  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:37:41.554211  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:37:42.054834  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:37:42.554656  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:37:43.054803  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:37:43.554197  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:37:44.054868  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:37:44.554173  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:37:45.054511  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:37:45.555146  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:37:46.054172  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:37:46.554689  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:37:47.054988  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:37:47.554105  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:37:48.054179  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:37:48.554567  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:37:49.055090  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:37:49.554191  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:37:50.054859  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:37:50.554428  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:37:51.054446  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:37:51.554183  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:37:52.054708  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:37:52.555108  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:37:53.054202  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:37:53.554707  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:37:54.055098  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:37:54.554987  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:37:55.054309  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:37:55.554719  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:37:56.054536  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:37:56.554210  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:37:57.054929  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:37:57.554721  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:37:58.054218  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:37:58.554411  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:37:59.054920  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:37:59.554244  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:00.055361  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:00.554514  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:01.055134  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:01.554204  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:02.054187  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:02.554322  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:03.054364  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:03.554384  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:04.054201  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:04.554751  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:05.054193  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:05.554160  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:06.055180  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:06.554182  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:07.054418  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:07.554711  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:08.054975  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:08.554179  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:09.054865  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:09.554248  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:10.054276  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:10.554249  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:11.054735  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:11.554983  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:12.054892  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:12.554906  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:13.054884  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:13.554790  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:14.054817  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:14.554262  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:15.054285  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:15.555076  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:16.054840  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:16.554968  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:17.054711  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:17.554408  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:18.054144  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:18.555105  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:19.054794  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:19.554241  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:20.054141  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:20.554878  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:21.054233  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:21.554268  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:22.054274  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:22.554734  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:23.054230  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:23.555076  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:24.054775  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:24.554908  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:25.054195  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:25.554743  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:26.054153  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:26.554779  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:27.054479  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:27.555019  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:28.054517  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:28.554355  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:29.055063  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:29.555123  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:30.054247  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:30.554719  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:31.055158  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:31.554845  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:32.054233  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:32.554441  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:33.055042  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:33.555029  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:34.054210  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:34.554266  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:38:34.554378  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:38:34.578922  195912 cri.go:89] found id: ""
	I1213 19:38:34.578946  195912 logs.go:282] 0 containers: []
	W1213 19:38:34.578954  195912 logs.go:284] No container was found matching "kube-apiserver"
	I1213 19:38:34.578960  195912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:38:34.579020  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:38:34.605096  195912 cri.go:89] found id: ""
	I1213 19:38:34.605124  195912 logs.go:282] 0 containers: []
	W1213 19:38:34.605134  195912 logs.go:284] No container was found matching "etcd"
	I1213 19:38:34.605141  195912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:38:34.605199  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:38:34.631883  195912 cri.go:89] found id: ""
	I1213 19:38:34.631911  195912 logs.go:282] 0 containers: []
	W1213 19:38:34.631920  195912 logs.go:284] No container was found matching "coredns"
	I1213 19:38:34.631926  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:38:34.631995  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:38:34.658172  195912 cri.go:89] found id: ""
	I1213 19:38:34.658194  195912 logs.go:282] 0 containers: []
	W1213 19:38:34.658202  195912 logs.go:284] No container was found matching "kube-scheduler"
	I1213 19:38:34.658208  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:38:34.658263  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:38:34.683552  195912 cri.go:89] found id: ""
	I1213 19:38:34.683575  195912 logs.go:282] 0 containers: []
	W1213 19:38:34.683583  195912 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:38:34.683589  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:38:34.683651  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:38:34.713111  195912 cri.go:89] found id: ""
	I1213 19:38:34.713134  195912 logs.go:282] 0 containers: []
	W1213 19:38:34.713143  195912 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 19:38:34.713149  195912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:38:34.713208  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:38:34.738447  195912 cri.go:89] found id: ""
	I1213 19:38:34.738470  195912 logs.go:282] 0 containers: []
	W1213 19:38:34.738480  195912 logs.go:284] No container was found matching "kindnet"
	I1213 19:38:34.738486  195912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 19:38:34.738542  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 19:38:34.770329  195912 cri.go:89] found id: ""
	I1213 19:38:34.770351  195912 logs.go:282] 0 containers: []
	W1213 19:38:34.770360  195912 logs.go:284] No container was found matching "storage-provisioner"
	I1213 19:38:34.770369  195912 logs.go:123] Gathering logs for kubelet ...
	I1213 19:38:34.770380  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:38:34.841730  195912 logs.go:123] Gathering logs for dmesg ...
	I1213 19:38:34.841766  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:38:34.857269  195912 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:38:34.857342  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:38:35.188252  195912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:38:35.188278  195912 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:38:35.188292  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:38:35.219077  195912 logs.go:123] Gathering logs for container status ...
	I1213 19:38:35.219113  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
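Editor's note: each retry of the apiserver wait loop gathers the same five diagnostics. For reference, these are the node-side commands behind the "Gathering logs for ..." lines, copied verbatim from the log (the kubectl binary and kubeconfig paths are the ones used by this run):

# kubelet unit log, recent kernel warnings, node description, CRI-O unit log, container status
sudo journalctl -u kubelet -n 400
sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
sudo journalctl -u crio -n 400
sudo `which crictl || echo crictl` ps -a || sudo docker ps -a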
	I1213 19:38:37.750052  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:37.760975  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:38:37.761070  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:38:37.786778  195912 cri.go:89] found id: ""
	I1213 19:38:37.786801  195912 logs.go:282] 0 containers: []
	W1213 19:38:37.786809  195912 logs.go:284] No container was found matching "kube-apiserver"
	I1213 19:38:37.786817  195912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:38:37.786875  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:38:37.816393  195912 cri.go:89] found id: ""
	I1213 19:38:37.816416  195912 logs.go:282] 0 containers: []
	W1213 19:38:37.816424  195912 logs.go:284] No container was found matching "etcd"
	I1213 19:38:37.816431  195912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:38:37.816490  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:38:37.845125  195912 cri.go:89] found id: ""
	I1213 19:38:37.845150  195912 logs.go:282] 0 containers: []
	W1213 19:38:37.845159  195912 logs.go:284] No container was found matching "coredns"
	I1213 19:38:37.845166  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:38:37.845226  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:38:37.870613  195912 cri.go:89] found id: ""
	I1213 19:38:37.870637  195912 logs.go:282] 0 containers: []
	W1213 19:38:37.870645  195912 logs.go:284] No container was found matching "kube-scheduler"
	I1213 19:38:37.870651  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:38:37.870709  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:38:37.900466  195912 cri.go:89] found id: ""
	I1213 19:38:37.900493  195912 logs.go:282] 0 containers: []
	W1213 19:38:37.900503  195912 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:38:37.900509  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:38:37.900586  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:38:37.928016  195912 cri.go:89] found id: ""
	I1213 19:38:37.928040  195912 logs.go:282] 0 containers: []
	W1213 19:38:37.928048  195912 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 19:38:37.928055  195912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:38:37.928139  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:38:37.954151  195912 cri.go:89] found id: ""
	I1213 19:38:37.954177  195912 logs.go:282] 0 containers: []
	W1213 19:38:37.954186  195912 logs.go:284] No container was found matching "kindnet"
	I1213 19:38:37.954214  195912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 19:38:37.954304  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 19:38:37.988108  195912 cri.go:89] found id: ""
	I1213 19:38:37.988143  195912 logs.go:282] 0 containers: []
	W1213 19:38:37.988152  195912 logs.go:284] No container was found matching "storage-provisioner"
	I1213 19:38:37.988161  195912 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:38:37.988207  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:38:38.020706  195912 logs.go:123] Gathering logs for container status ...
	I1213 19:38:38.020741  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:38:38.052975  195912 logs.go:123] Gathering logs for kubelet ...
	I1213 19:38:38.053085  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:38:38.137391  195912 logs.go:123] Gathering logs for dmesg ...
	I1213 19:38:38.137430  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:38:38.154482  195912 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:38:38.154558  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:38:38.215379  195912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:38:40.715539  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:40.725552  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:38:40.725628  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:38:40.763024  195912 cri.go:89] found id: ""
	I1213 19:38:40.763048  195912 logs.go:282] 0 containers: []
	W1213 19:38:40.763057  195912 logs.go:284] No container was found matching "kube-apiserver"
	I1213 19:38:40.763063  195912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:38:40.763130  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:38:40.789476  195912 cri.go:89] found id: ""
	I1213 19:38:40.789499  195912 logs.go:282] 0 containers: []
	W1213 19:38:40.789507  195912 logs.go:284] No container was found matching "etcd"
	I1213 19:38:40.789513  195912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:38:40.789571  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:38:40.817042  195912 cri.go:89] found id: ""
	I1213 19:38:40.817065  195912 logs.go:282] 0 containers: []
	W1213 19:38:40.817073  195912 logs.go:284] No container was found matching "coredns"
	I1213 19:38:40.817079  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:38:40.817188  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:38:40.844282  195912 cri.go:89] found id: ""
	I1213 19:38:40.844307  195912 logs.go:282] 0 containers: []
	W1213 19:38:40.844342  195912 logs.go:284] No container was found matching "kube-scheduler"
	I1213 19:38:40.844357  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:38:40.844432  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:38:40.869664  195912 cri.go:89] found id: ""
	I1213 19:38:40.869687  195912 logs.go:282] 0 containers: []
	W1213 19:38:40.869695  195912 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:38:40.869701  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:38:40.869759  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:38:40.898897  195912 cri.go:89] found id: ""
	I1213 19:38:40.898963  195912 logs.go:282] 0 containers: []
	W1213 19:38:40.898978  195912 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 19:38:40.898985  195912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:38:40.899045  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:38:40.924603  195912 cri.go:89] found id: ""
	I1213 19:38:40.924630  195912 logs.go:282] 0 containers: []
	W1213 19:38:40.924639  195912 logs.go:284] No container was found matching "kindnet"
	I1213 19:38:40.924646  195912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 19:38:40.924758  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 19:38:40.952107  195912 cri.go:89] found id: ""
	I1213 19:38:40.952130  195912 logs.go:282] 0 containers: []
	W1213 19:38:40.952139  195912 logs.go:284] No container was found matching "storage-provisioner"
	I1213 19:38:40.952149  195912 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:38:40.952181  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:38:41.016082  195912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:38:41.016101  195912 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:38:41.016115  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:38:41.045968  195912 logs.go:123] Gathering logs for container status ...
	I1213 19:38:41.046003  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:38:41.091227  195912 logs.go:123] Gathering logs for kubelet ...
	I1213 19:38:41.091294  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:38:41.165040  195912 logs.go:123] Gathering logs for dmesg ...
	I1213 19:38:41.165076  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:38:43.680612  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:43.690694  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:38:43.690769  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:38:43.716182  195912 cri.go:89] found id: ""
	I1213 19:38:43.716208  195912 logs.go:282] 0 containers: []
	W1213 19:38:43.716217  195912 logs.go:284] No container was found matching "kube-apiserver"
	I1213 19:38:43.716224  195912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:38:43.716283  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:38:43.742786  195912 cri.go:89] found id: ""
	I1213 19:38:43.742808  195912 logs.go:282] 0 containers: []
	W1213 19:38:43.742817  195912 logs.go:284] No container was found matching "etcd"
	I1213 19:38:43.742823  195912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:38:43.742880  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:38:43.768672  195912 cri.go:89] found id: ""
	I1213 19:38:43.768693  195912 logs.go:282] 0 containers: []
	W1213 19:38:43.768702  195912 logs.go:284] No container was found matching "coredns"
	I1213 19:38:43.768708  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:38:43.768765  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:38:43.793866  195912 cri.go:89] found id: ""
	I1213 19:38:43.793976  195912 logs.go:282] 0 containers: []
	W1213 19:38:43.794011  195912 logs.go:284] No container was found matching "kube-scheduler"
	I1213 19:38:43.794038  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:38:43.794125  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:38:43.819166  195912 cri.go:89] found id: ""
	I1213 19:38:43.819194  195912 logs.go:282] 0 containers: []
	W1213 19:38:43.819224  195912 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:38:43.819233  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:38:43.819323  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:38:43.846946  195912 cri.go:89] found id: ""
	I1213 19:38:43.846972  195912 logs.go:282] 0 containers: []
	W1213 19:38:43.846982  195912 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 19:38:43.846989  195912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:38:43.847047  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:38:43.871704  195912 cri.go:89] found id: ""
	I1213 19:38:43.871738  195912 logs.go:282] 0 containers: []
	W1213 19:38:43.871750  195912 logs.go:284] No container was found matching "kindnet"
	I1213 19:38:43.871756  195912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 19:38:43.871839  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 19:38:43.899557  195912 cri.go:89] found id: ""
	I1213 19:38:43.899586  195912 logs.go:282] 0 containers: []
	W1213 19:38:43.899595  195912 logs.go:284] No container was found matching "storage-provisioner"
	I1213 19:38:43.899605  195912 logs.go:123] Gathering logs for kubelet ...
	I1213 19:38:43.899617  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:38:43.966431  195912 logs.go:123] Gathering logs for dmesg ...
	I1213 19:38:43.966468  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:38:43.982665  195912 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:38:43.982696  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:38:44.047321  195912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:38:44.047379  195912 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:38:44.047418  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:38:44.079374  195912 logs.go:123] Gathering logs for container status ...
	I1213 19:38:44.079407  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:38:46.615627  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:46.626863  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:38:46.626924  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:38:46.668459  195912 cri.go:89] found id: ""
	I1213 19:38:46.668481  195912 logs.go:282] 0 containers: []
	W1213 19:38:46.668490  195912 logs.go:284] No container was found matching "kube-apiserver"
	I1213 19:38:46.668496  195912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:38:46.668553  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:38:46.698913  195912 cri.go:89] found id: ""
	I1213 19:38:46.698936  195912 logs.go:282] 0 containers: []
	W1213 19:38:46.698945  195912 logs.go:284] No container was found matching "etcd"
	I1213 19:38:46.698951  195912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:38:46.699012  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:38:46.735120  195912 cri.go:89] found id: ""
	I1213 19:38:46.735150  195912 logs.go:282] 0 containers: []
	W1213 19:38:46.735168  195912 logs.go:284] No container was found matching "coredns"
	I1213 19:38:46.735175  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:38:46.735232  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:38:46.768326  195912 cri.go:89] found id: ""
	I1213 19:38:46.768346  195912 logs.go:282] 0 containers: []
	W1213 19:38:46.768354  195912 logs.go:284] No container was found matching "kube-scheduler"
	I1213 19:38:46.768360  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:38:46.768415  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:38:46.799368  195912 cri.go:89] found id: ""
	I1213 19:38:46.799391  195912 logs.go:282] 0 containers: []
	W1213 19:38:46.799399  195912 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:38:46.799406  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:38:46.799468  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:38:46.832454  195912 cri.go:89] found id: ""
	I1213 19:38:46.832527  195912 logs.go:282] 0 containers: []
	W1213 19:38:46.832550  195912 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 19:38:46.832570  195912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:38:46.832656  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:38:46.863980  195912 cri.go:89] found id: ""
	I1213 19:38:46.864015  195912 logs.go:282] 0 containers: []
	W1213 19:38:46.864025  195912 logs.go:284] No container was found matching "kindnet"
	I1213 19:38:46.864031  195912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 19:38:46.864097  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 19:38:46.902825  195912 cri.go:89] found id: ""
	I1213 19:38:46.902891  195912 logs.go:282] 0 containers: []
	W1213 19:38:46.902916  195912 logs.go:284] No container was found matching "storage-provisioner"
	I1213 19:38:46.902938  195912 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:38:46.902976  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:38:46.992665  195912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:38:46.992688  195912 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:38:46.992701  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:38:47.026675  195912 logs.go:123] Gathering logs for container status ...
	I1213 19:38:47.026711  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:38:47.058986  195912 logs.go:123] Gathering logs for kubelet ...
	I1213 19:38:47.059017  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:38:47.159912  195912 logs.go:123] Gathering logs for dmesg ...
	I1213 19:38:47.159950  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:38:49.685719  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:49.695444  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:38:49.695508  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:38:49.720152  195912 cri.go:89] found id: ""
	I1213 19:38:49.720175  195912 logs.go:282] 0 containers: []
	W1213 19:38:49.720184  195912 logs.go:284] No container was found matching "kube-apiserver"
	I1213 19:38:49.720190  195912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:38:49.720248  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:38:49.747306  195912 cri.go:89] found id: ""
	I1213 19:38:49.747328  195912 logs.go:282] 0 containers: []
	W1213 19:38:49.747336  195912 logs.go:284] No container was found matching "etcd"
	I1213 19:38:49.747342  195912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:38:49.747411  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:38:49.773552  195912 cri.go:89] found id: ""
	I1213 19:38:49.773579  195912 logs.go:282] 0 containers: []
	W1213 19:38:49.773588  195912 logs.go:284] No container was found matching "coredns"
	I1213 19:38:49.773594  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:38:49.773651  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:38:49.798440  195912 cri.go:89] found id: ""
	I1213 19:38:49.798470  195912 logs.go:282] 0 containers: []
	W1213 19:38:49.798479  195912 logs.go:284] No container was found matching "kube-scheduler"
	I1213 19:38:49.798485  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:38:49.798565  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:38:49.824045  195912 cri.go:89] found id: ""
	I1213 19:38:49.824072  195912 logs.go:282] 0 containers: []
	W1213 19:38:49.824088  195912 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:38:49.824095  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:38:49.824169  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:38:49.849923  195912 cri.go:89] found id: ""
	I1213 19:38:49.849945  195912 logs.go:282] 0 containers: []
	W1213 19:38:49.849954  195912 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 19:38:49.849960  195912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:38:49.850024  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:38:49.883928  195912 cri.go:89] found id: ""
	I1213 19:38:49.883951  195912 logs.go:282] 0 containers: []
	W1213 19:38:49.883959  195912 logs.go:284] No container was found matching "kindnet"
	I1213 19:38:49.883966  195912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 19:38:49.884023  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 19:38:49.908994  195912 cri.go:89] found id: ""
	I1213 19:38:49.909061  195912 logs.go:282] 0 containers: []
	W1213 19:38:49.909071  195912 logs.go:284] No container was found matching "storage-provisioner"
	I1213 19:38:49.909081  195912 logs.go:123] Gathering logs for kubelet ...
	I1213 19:38:49.909093  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:38:49.976837  195912 logs.go:123] Gathering logs for dmesg ...
	I1213 19:38:49.976882  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:38:49.997443  195912 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:38:49.997479  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:38:50.066233  195912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:38:50.066310  195912 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:38:50.066333  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:38:50.105537  195912 logs.go:123] Gathering logs for container status ...
	I1213 19:38:50.105576  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:38:52.642979  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:52.652985  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:38:52.653088  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:38:52.682368  195912 cri.go:89] found id: ""
	I1213 19:38:52.682442  195912 logs.go:282] 0 containers: []
	W1213 19:38:52.682467  195912 logs.go:284] No container was found matching "kube-apiserver"
	I1213 19:38:52.682481  195912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:38:52.682552  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:38:52.708984  195912 cri.go:89] found id: ""
	I1213 19:38:52.709043  195912 logs.go:282] 0 containers: []
	W1213 19:38:52.709053  195912 logs.go:284] No container was found matching "etcd"
	I1213 19:38:52.709060  195912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:38:52.709135  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:38:52.734568  195912 cri.go:89] found id: ""
	I1213 19:38:52.734591  195912 logs.go:282] 0 containers: []
	W1213 19:38:52.734600  195912 logs.go:284] No container was found matching "coredns"
	I1213 19:38:52.734606  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:38:52.734667  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:38:52.759598  195912 cri.go:89] found id: ""
	I1213 19:38:52.759624  195912 logs.go:282] 0 containers: []
	W1213 19:38:52.759634  195912 logs.go:284] No container was found matching "kube-scheduler"
	I1213 19:38:52.759639  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:38:52.759697  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:38:52.785531  195912 cri.go:89] found id: ""
	I1213 19:38:52.785555  195912 logs.go:282] 0 containers: []
	W1213 19:38:52.785564  195912 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:38:52.785570  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:38:52.785637  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:38:52.812860  195912 cri.go:89] found id: ""
	I1213 19:38:52.812889  195912 logs.go:282] 0 containers: []
	W1213 19:38:52.812897  195912 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 19:38:52.812903  195912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:38:52.812967  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:38:52.842901  195912 cri.go:89] found id: ""
	I1213 19:38:52.842925  195912 logs.go:282] 0 containers: []
	W1213 19:38:52.842933  195912 logs.go:284] No container was found matching "kindnet"
	I1213 19:38:52.842939  195912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 19:38:52.842998  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 19:38:52.870173  195912 cri.go:89] found id: ""
	I1213 19:38:52.870248  195912 logs.go:282] 0 containers: []
	W1213 19:38:52.870270  195912 logs.go:284] No container was found matching "storage-provisioner"
	I1213 19:38:52.870294  195912 logs.go:123] Gathering logs for kubelet ...
	I1213 19:38:52.870322  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:38:52.939988  195912 logs.go:123] Gathering logs for dmesg ...
	I1213 19:38:52.940023  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:38:52.954374  195912 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:38:52.954405  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:38:53.024846  195912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:38:53.024868  195912 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:38:53.024882  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:38:53.055256  195912 logs.go:123] Gathering logs for container status ...
	I1213 19:38:53.055288  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:38:55.607918  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:55.620211  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:38:55.620294  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:38:55.648198  195912 cri.go:89] found id: ""
	I1213 19:38:55.648221  195912 logs.go:282] 0 containers: []
	W1213 19:38:55.648230  195912 logs.go:284] No container was found matching "kube-apiserver"
	I1213 19:38:55.648236  195912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:38:55.648303  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:38:55.674074  195912 cri.go:89] found id: ""
	I1213 19:38:55.674097  195912 logs.go:282] 0 containers: []
	W1213 19:38:55.674105  195912 logs.go:284] No container was found matching "etcd"
	I1213 19:38:55.674111  195912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:38:55.674178  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:38:55.700457  195912 cri.go:89] found id: ""
	I1213 19:38:55.700480  195912 logs.go:282] 0 containers: []
	W1213 19:38:55.700489  195912 logs.go:284] No container was found matching "coredns"
	I1213 19:38:55.700494  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:38:55.700554  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:38:55.731996  195912 cri.go:89] found id: ""
	I1213 19:38:55.732025  195912 logs.go:282] 0 containers: []
	W1213 19:38:55.732034  195912 logs.go:284] No container was found matching "kube-scheduler"
	I1213 19:38:55.732041  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:38:55.732100  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:38:55.758636  195912 cri.go:89] found id: ""
	I1213 19:38:55.758660  195912 logs.go:282] 0 containers: []
	W1213 19:38:55.758669  195912 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:38:55.758676  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:38:55.758752  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:38:55.795347  195912 cri.go:89] found id: ""
	I1213 19:38:55.795422  195912 logs.go:282] 0 containers: []
	W1213 19:38:55.795438  195912 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 19:38:55.795446  195912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:38:55.795510  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:38:55.827349  195912 cri.go:89] found id: ""
	I1213 19:38:55.827375  195912 logs.go:282] 0 containers: []
	W1213 19:38:55.827384  195912 logs.go:284] No container was found matching "kindnet"
	I1213 19:38:55.827390  195912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 19:38:55.827447  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 19:38:55.853120  195912 cri.go:89] found id: ""
	I1213 19:38:55.853192  195912 logs.go:282] 0 containers: []
	W1213 19:38:55.853216  195912 logs.go:284] No container was found matching "storage-provisioner"
	I1213 19:38:55.853237  195912 logs.go:123] Gathering logs for kubelet ...
	I1213 19:38:55.853274  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:38:55.924111  195912 logs.go:123] Gathering logs for dmesg ...
	I1213 19:38:55.924155  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:38:55.938444  195912 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:38:55.938475  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:38:56.010266  195912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:38:56.010286  195912 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:38:56.010301  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:38:56.043028  195912 logs.go:123] Gathering logs for container status ...
	I1213 19:38:56.043062  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:38:58.585787  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:38:58.596147  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:38:58.596217  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:38:58.621146  195912 cri.go:89] found id: ""
	I1213 19:38:58.621171  195912 logs.go:282] 0 containers: []
	W1213 19:38:58.621190  195912 logs.go:284] No container was found matching "kube-apiserver"
	I1213 19:38:58.621196  195912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:38:58.621261  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:38:58.647757  195912 cri.go:89] found id: ""
	I1213 19:38:58.647783  195912 logs.go:282] 0 containers: []
	W1213 19:38:58.647791  195912 logs.go:284] No container was found matching "etcd"
	I1213 19:38:58.647797  195912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:38:58.647855  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:38:58.674766  195912 cri.go:89] found id: ""
	I1213 19:38:58.674796  195912 logs.go:282] 0 containers: []
	W1213 19:38:58.674804  195912 logs.go:284] No container was found matching "coredns"
	I1213 19:38:58.674810  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:38:58.674871  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:38:58.704431  195912 cri.go:89] found id: ""
	I1213 19:38:58.704456  195912 logs.go:282] 0 containers: []
	W1213 19:38:58.704465  195912 logs.go:284] No container was found matching "kube-scheduler"
	I1213 19:38:58.704472  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:38:58.704534  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:38:58.730243  195912 cri.go:89] found id: ""
	I1213 19:38:58.730266  195912 logs.go:282] 0 containers: []
	W1213 19:38:58.730275  195912 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:38:58.730281  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:38:58.730340  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:38:58.756281  195912 cri.go:89] found id: ""
	I1213 19:38:58.756305  195912 logs.go:282] 0 containers: []
	W1213 19:38:58.756314  195912 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 19:38:58.756319  195912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:38:58.756377  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:38:58.784673  195912 cri.go:89] found id: ""
	I1213 19:38:58.784703  195912 logs.go:282] 0 containers: []
	W1213 19:38:58.784713  195912 logs.go:284] No container was found matching "kindnet"
	I1213 19:38:58.784719  195912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 19:38:58.784778  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 19:38:58.810925  195912 cri.go:89] found id: ""
	I1213 19:38:58.810949  195912 logs.go:282] 0 containers: []
	W1213 19:38:58.810958  195912 logs.go:284] No container was found matching "storage-provisioner"
	I1213 19:38:58.810966  195912 logs.go:123] Gathering logs for kubelet ...
	I1213 19:38:58.810977  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:38:58.879143  195912 logs.go:123] Gathering logs for dmesg ...
	I1213 19:38:58.879184  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:38:58.898952  195912 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:38:58.898983  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:38:58.966946  195912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:38:58.966966  195912 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:38:58.966978  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:38:59.000310  195912 logs.go:123] Gathering logs for container status ...
	I1213 19:38:59.000345  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:39:01.532562  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:39:01.544745  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:39:01.544815  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:39:01.572397  195912 cri.go:89] found id: ""
	I1213 19:39:01.572423  195912 logs.go:282] 0 containers: []
	W1213 19:39:01.572432  195912 logs.go:284] No container was found matching "kube-apiserver"
	I1213 19:39:01.572439  195912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:39:01.572515  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:39:01.598496  195912 cri.go:89] found id: ""
	I1213 19:39:01.598522  195912 logs.go:282] 0 containers: []
	W1213 19:39:01.598531  195912 logs.go:284] No container was found matching "etcd"
	I1213 19:39:01.598537  195912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:39:01.598598  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:39:01.624682  195912 cri.go:89] found id: ""
	I1213 19:39:01.624710  195912 logs.go:282] 0 containers: []
	W1213 19:39:01.624721  195912 logs.go:284] No container was found matching "coredns"
	I1213 19:39:01.624727  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:39:01.624841  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:39:01.651385  195912 cri.go:89] found id: ""
	I1213 19:39:01.651453  195912 logs.go:282] 0 containers: []
	W1213 19:39:01.651469  195912 logs.go:284] No container was found matching "kube-scheduler"
	I1213 19:39:01.651476  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:39:01.651551  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:39:01.681403  195912 cri.go:89] found id: ""
	I1213 19:39:01.681424  195912 logs.go:282] 0 containers: []
	W1213 19:39:01.681432  195912 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:39:01.681438  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:39:01.681496  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:39:01.708019  195912 cri.go:89] found id: ""
	I1213 19:39:01.708045  195912 logs.go:282] 0 containers: []
	W1213 19:39:01.708055  195912 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 19:39:01.708062  195912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:39:01.708147  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:39:01.734847  195912 cri.go:89] found id: ""
	I1213 19:39:01.734913  195912 logs.go:282] 0 containers: []
	W1213 19:39:01.734929  195912 logs.go:284] No container was found matching "kindnet"
	I1213 19:39:01.734938  195912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 19:39:01.735003  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 19:39:01.761822  195912 cri.go:89] found id: ""
	I1213 19:39:01.761859  195912 logs.go:282] 0 containers: []
	W1213 19:39:01.761868  195912 logs.go:284] No container was found matching "storage-provisioner"
	I1213 19:39:01.761876  195912 logs.go:123] Gathering logs for kubelet ...
	I1213 19:39:01.761889  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:39:01.828834  195912 logs.go:123] Gathering logs for dmesg ...
	I1213 19:39:01.828874  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:39:01.843655  195912 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:39:01.843686  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:39:01.913806  195912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:39:01.913828  195912 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:39:01.913840  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:39:01.944938  195912 logs.go:123] Gathering logs for container status ...
	I1213 19:39:01.944972  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:39:04.479191  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:39:04.489717  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:39:04.489799  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:39:04.516335  195912 cri.go:89] found id: ""
	I1213 19:39:04.516357  195912 logs.go:282] 0 containers: []
	W1213 19:39:04.516365  195912 logs.go:284] No container was found matching "kube-apiserver"
	I1213 19:39:04.516371  195912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:39:04.516434  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:39:04.546461  195912 cri.go:89] found id: ""
	I1213 19:39:04.546487  195912 logs.go:282] 0 containers: []
	W1213 19:39:04.546496  195912 logs.go:284] No container was found matching "etcd"
	I1213 19:39:04.546502  195912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:39:04.546561  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:39:04.573165  195912 cri.go:89] found id: ""
	I1213 19:39:04.573191  195912 logs.go:282] 0 containers: []
	W1213 19:39:04.573200  195912 logs.go:284] No container was found matching "coredns"
	I1213 19:39:04.573206  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:39:04.573266  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:39:04.602221  195912 cri.go:89] found id: ""
	I1213 19:39:04.602246  195912 logs.go:282] 0 containers: []
	W1213 19:39:04.602255  195912 logs.go:284] No container was found matching "kube-scheduler"
	I1213 19:39:04.602261  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:39:04.602323  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:39:04.628393  195912 cri.go:89] found id: ""
	I1213 19:39:04.628420  195912 logs.go:282] 0 containers: []
	W1213 19:39:04.628429  195912 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:39:04.628435  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:39:04.628494  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:39:04.656043  195912 cri.go:89] found id: ""
	I1213 19:39:04.656084  195912 logs.go:282] 0 containers: []
	W1213 19:39:04.656092  195912 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 19:39:04.656099  195912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:39:04.656156  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:39:04.683032  195912 cri.go:89] found id: ""
	I1213 19:39:04.683058  195912 logs.go:282] 0 containers: []
	W1213 19:39:04.683067  195912 logs.go:284] No container was found matching "kindnet"
	I1213 19:39:04.683073  195912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 19:39:04.683160  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 19:39:04.712432  195912 cri.go:89] found id: ""
	I1213 19:39:04.712459  195912 logs.go:282] 0 containers: []
	W1213 19:39:04.712468  195912 logs.go:284] No container was found matching "storage-provisioner"
	I1213 19:39:04.712477  195912 logs.go:123] Gathering logs for container status ...
	I1213 19:39:04.712489  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:39:04.754538  195912 logs.go:123] Gathering logs for kubelet ...
	I1213 19:39:04.754567  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:39:04.843025  195912 logs.go:123] Gathering logs for dmesg ...
	I1213 19:39:04.843172  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:39:04.865078  195912 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:39:04.865166  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:39:04.958957  195912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:39:04.958981  195912 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:39:04.958994  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:39:07.497262  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:39:07.507603  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:39:07.507677  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:39:07.540291  195912 cri.go:89] found id: ""
	I1213 19:39:07.540320  195912 logs.go:282] 0 containers: []
	W1213 19:39:07.540328  195912 logs.go:284] No container was found matching "kube-apiserver"
	I1213 19:39:07.540334  195912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:39:07.540397  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:39:07.566422  195912 cri.go:89] found id: ""
	I1213 19:39:07.566444  195912 logs.go:282] 0 containers: []
	W1213 19:39:07.566452  195912 logs.go:284] No container was found matching "etcd"
	I1213 19:39:07.566458  195912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:39:07.566521  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:39:07.591919  195912 cri.go:89] found id: ""
	I1213 19:39:07.591943  195912 logs.go:282] 0 containers: []
	W1213 19:39:07.591951  195912 logs.go:284] No container was found matching "coredns"
	I1213 19:39:07.591958  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:39:07.592034  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:39:07.622355  195912 cri.go:89] found id: ""
	I1213 19:39:07.622379  195912 logs.go:282] 0 containers: []
	W1213 19:39:07.622387  195912 logs.go:284] No container was found matching "kube-scheduler"
	I1213 19:39:07.622394  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:39:07.622455  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:39:07.650691  195912 cri.go:89] found id: ""
	I1213 19:39:07.650715  195912 logs.go:282] 0 containers: []
	W1213 19:39:07.650724  195912 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:39:07.650730  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:39:07.650799  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:39:07.677960  195912 cri.go:89] found id: ""
	I1213 19:39:07.678000  195912 logs.go:282] 0 containers: []
	W1213 19:39:07.678009  195912 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 19:39:07.678015  195912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:39:07.678084  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:39:07.708239  195912 cri.go:89] found id: ""
	I1213 19:39:07.708309  195912 logs.go:282] 0 containers: []
	W1213 19:39:07.708333  195912 logs.go:284] No container was found matching "kindnet"
	I1213 19:39:07.708356  195912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 19:39:07.708446  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 19:39:07.734399  195912 cri.go:89] found id: ""
	I1213 19:39:07.734467  195912 logs.go:282] 0 containers: []
	W1213 19:39:07.734482  195912 logs.go:284] No container was found matching "storage-provisioner"
	I1213 19:39:07.734491  195912 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:39:07.734504  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:39:07.802019  195912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:39:07.802038  195912 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:39:07.802049  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:39:07.834107  195912 logs.go:123] Gathering logs for container status ...
	I1213 19:39:07.834140  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:39:07.865645  195912 logs.go:123] Gathering logs for kubelet ...
	I1213 19:39:07.865675  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:39:07.931934  195912 logs.go:123] Gathering logs for dmesg ...
	I1213 19:39:07.931970  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:39:10.446727  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:39:10.457585  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:39:10.457653  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:39:10.506181  195912 cri.go:89] found id: ""
	I1213 19:39:10.506204  195912 logs.go:282] 0 containers: []
	W1213 19:39:10.506213  195912 logs.go:284] No container was found matching "kube-apiserver"
	I1213 19:39:10.506219  195912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:39:10.506283  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:39:10.547110  195912 cri.go:89] found id: ""
	I1213 19:39:10.547137  195912 logs.go:282] 0 containers: []
	W1213 19:39:10.547147  195912 logs.go:284] No container was found matching "etcd"
	I1213 19:39:10.547153  195912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:39:10.547211  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:39:10.585705  195912 cri.go:89] found id: ""
	I1213 19:39:10.585727  195912 logs.go:282] 0 containers: []
	W1213 19:39:10.585736  195912 logs.go:284] No container was found matching "coredns"
	I1213 19:39:10.585746  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:39:10.585806  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:39:10.614289  195912 cri.go:89] found id: ""
	I1213 19:39:10.614311  195912 logs.go:282] 0 containers: []
	W1213 19:39:10.614320  195912 logs.go:284] No container was found matching "kube-scheduler"
	I1213 19:39:10.614326  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:39:10.614381  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:39:10.651150  195912 cri.go:89] found id: ""
	I1213 19:39:10.651173  195912 logs.go:282] 0 containers: []
	W1213 19:39:10.651181  195912 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:39:10.651188  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:39:10.651249  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:39:10.682388  195912 cri.go:89] found id: ""
	I1213 19:39:10.682412  195912 logs.go:282] 0 containers: []
	W1213 19:39:10.682421  195912 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 19:39:10.682427  195912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:39:10.682495  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:39:10.718026  195912 cri.go:89] found id: ""
	I1213 19:39:10.718048  195912 logs.go:282] 0 containers: []
	W1213 19:39:10.718057  195912 logs.go:284] No container was found matching "kindnet"
	I1213 19:39:10.718063  195912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 19:39:10.718122  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 19:39:10.755264  195912 cri.go:89] found id: ""
	I1213 19:39:10.755286  195912 logs.go:282] 0 containers: []
	W1213 19:39:10.755294  195912 logs.go:284] No container was found matching "storage-provisioner"
	I1213 19:39:10.755303  195912 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:39:10.755314  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:39:10.786553  195912 logs.go:123] Gathering logs for container status ...
	I1213 19:39:10.786587  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:39:10.830504  195912 logs.go:123] Gathering logs for kubelet ...
	I1213 19:39:10.830532  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:39:10.909391  195912 logs.go:123] Gathering logs for dmesg ...
	I1213 19:39:10.909428  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:39:10.925097  195912 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:39:10.925124  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:39:10.994136  195912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:39:13.494838  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:39:13.505128  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:39:13.505198  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:39:13.534143  195912 cri.go:89] found id: ""
	I1213 19:39:13.534169  195912 logs.go:282] 0 containers: []
	W1213 19:39:13.534178  195912 logs.go:284] No container was found matching "kube-apiserver"
	I1213 19:39:13.534184  195912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:39:13.534242  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:39:13.559664  195912 cri.go:89] found id: ""
	I1213 19:39:13.559686  195912 logs.go:282] 0 containers: []
	W1213 19:39:13.559697  195912 logs.go:284] No container was found matching "etcd"
	I1213 19:39:13.559703  195912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:39:13.559763  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:39:13.586134  195912 cri.go:89] found id: ""
	I1213 19:39:13.586160  195912 logs.go:282] 0 containers: []
	W1213 19:39:13.586170  195912 logs.go:284] No container was found matching "coredns"
	I1213 19:39:13.586176  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:39:13.586234  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:39:13.611218  195912 cri.go:89] found id: ""
	I1213 19:39:13.611241  195912 logs.go:282] 0 containers: []
	W1213 19:39:13.611250  195912 logs.go:284] No container was found matching "kube-scheduler"
	I1213 19:39:13.611256  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:39:13.611316  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:39:13.635718  195912 cri.go:89] found id: ""
	I1213 19:39:13.635743  195912 logs.go:282] 0 containers: []
	W1213 19:39:13.635752  195912 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:39:13.635758  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:39:13.635816  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:39:13.661397  195912 cri.go:89] found id: ""
	I1213 19:39:13.661425  195912 logs.go:282] 0 containers: []
	W1213 19:39:13.661434  195912 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 19:39:13.661442  195912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:39:13.661501  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:39:13.687817  195912 cri.go:89] found id: ""
	I1213 19:39:13.687843  195912 logs.go:282] 0 containers: []
	W1213 19:39:13.687852  195912 logs.go:284] No container was found matching "kindnet"
	I1213 19:39:13.687859  195912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 19:39:13.687920  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 19:39:13.717175  195912 cri.go:89] found id: ""
	I1213 19:39:13.717201  195912 logs.go:282] 0 containers: []
	W1213 19:39:13.717210  195912 logs.go:284] No container was found matching "storage-provisioner"
	I1213 19:39:13.717218  195912 logs.go:123] Gathering logs for container status ...
	I1213 19:39:13.717230  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:39:13.751010  195912 logs.go:123] Gathering logs for kubelet ...
	I1213 19:39:13.751036  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:39:13.818691  195912 logs.go:123] Gathering logs for dmesg ...
	I1213 19:39:13.818730  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:39:13.832705  195912 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:39:13.832743  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:39:13.893252  195912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:39:13.893274  195912 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:39:13.893287  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:39:16.425843  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:39:16.435337  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:39:16.435401  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:39:16.460361  195912 cri.go:89] found id: ""
	I1213 19:39:16.460386  195912 logs.go:282] 0 containers: []
	W1213 19:39:16.460395  195912 logs.go:284] No container was found matching "kube-apiserver"
	I1213 19:39:16.460401  195912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:39:16.460470  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:39:16.486731  195912 cri.go:89] found id: ""
	I1213 19:39:16.486768  195912 logs.go:282] 0 containers: []
	W1213 19:39:16.486777  195912 logs.go:284] No container was found matching "etcd"
	I1213 19:39:16.486783  195912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:39:16.486852  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:39:16.516412  195912 cri.go:89] found id: ""
	I1213 19:39:16.516437  195912 logs.go:282] 0 containers: []
	W1213 19:39:16.516445  195912 logs.go:284] No container was found matching "coredns"
	I1213 19:39:16.516452  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:39:16.516521  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:39:16.546047  195912 cri.go:89] found id: ""
	I1213 19:39:16.546070  195912 logs.go:282] 0 containers: []
	W1213 19:39:16.546078  195912 logs.go:284] No container was found matching "kube-scheduler"
	I1213 19:39:16.546085  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:39:16.546143  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:39:16.571223  195912 cri.go:89] found id: ""
	I1213 19:39:16.571248  195912 logs.go:282] 0 containers: []
	W1213 19:39:16.571257  195912 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:39:16.571263  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:39:16.571324  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:39:16.595759  195912 cri.go:89] found id: ""
	I1213 19:39:16.595823  195912 logs.go:282] 0 containers: []
	W1213 19:39:16.595846  195912 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 19:39:16.595866  195912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:39:16.595945  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:39:16.622647  195912 cri.go:89] found id: ""
	I1213 19:39:16.622674  195912 logs.go:282] 0 containers: []
	W1213 19:39:16.622684  195912 logs.go:284] No container was found matching "kindnet"
	I1213 19:39:16.622696  195912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 19:39:16.622780  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 19:39:16.652141  195912 cri.go:89] found id: ""
	I1213 19:39:16.652162  195912 logs.go:282] 0 containers: []
	W1213 19:39:16.652173  195912 logs.go:284] No container was found matching "storage-provisioner"
	I1213 19:39:16.652182  195912 logs.go:123] Gathering logs for dmesg ...
	I1213 19:39:16.652194  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:39:16.666168  195912 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:39:16.666199  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:39:16.730508  195912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:39:16.730571  195912 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:39:16.730596  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:39:16.762169  195912 logs.go:123] Gathering logs for container status ...
	I1213 19:39:16.762205  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:39:16.790155  195912 logs.go:123] Gathering logs for kubelet ...
	I1213 19:39:16.790182  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:39:19.361109  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:39:19.376542  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:39:19.376607  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:39:19.419467  195912 cri.go:89] found id: ""
	I1213 19:39:19.419504  195912 logs.go:282] 0 containers: []
	W1213 19:39:19.419513  195912 logs.go:284] No container was found matching "kube-apiserver"
	I1213 19:39:19.419520  195912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:39:19.419585  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:39:19.459423  195912 cri.go:89] found id: ""
	I1213 19:39:19.459444  195912 logs.go:282] 0 containers: []
	W1213 19:39:19.459452  195912 logs.go:284] No container was found matching "etcd"
	I1213 19:39:19.459458  195912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:39:19.459511  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:39:19.499383  195912 cri.go:89] found id: ""
	I1213 19:39:19.499403  195912 logs.go:282] 0 containers: []
	W1213 19:39:19.499411  195912 logs.go:284] No container was found matching "coredns"
	I1213 19:39:19.499417  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:39:19.499478  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:39:19.535415  195912 cri.go:89] found id: ""
	I1213 19:39:19.535438  195912 logs.go:282] 0 containers: []
	W1213 19:39:19.535457  195912 logs.go:284] No container was found matching "kube-scheduler"
	I1213 19:39:19.535463  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:39:19.535519  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:39:19.568839  195912 cri.go:89] found id: ""
	I1213 19:39:19.568862  195912 logs.go:282] 0 containers: []
	W1213 19:39:19.568870  195912 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:39:19.568876  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:39:19.568938  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:39:19.599402  195912 cri.go:89] found id: ""
	I1213 19:39:19.599423  195912 logs.go:282] 0 containers: []
	W1213 19:39:19.599432  195912 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 19:39:19.599439  195912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:39:19.599495  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:39:19.628670  195912 cri.go:89] found id: ""
	I1213 19:39:19.628692  195912 logs.go:282] 0 containers: []
	W1213 19:39:19.628700  195912 logs.go:284] No container was found matching "kindnet"
	I1213 19:39:19.628706  195912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 19:39:19.628764  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 19:39:19.658672  195912 cri.go:89] found id: ""
	I1213 19:39:19.658750  195912 logs.go:282] 0 containers: []
	W1213 19:39:19.658775  195912 logs.go:284] No container was found matching "storage-provisioner"
	I1213 19:39:19.658797  195912 logs.go:123] Gathering logs for kubelet ...
	I1213 19:39:19.658841  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:39:19.737075  195912 logs.go:123] Gathering logs for dmesg ...
	I1213 19:39:19.737111  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:39:19.752932  195912 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:39:19.752964  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:39:19.834095  195912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:39:19.834117  195912 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:39:19.834129  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:39:19.868198  195912 logs.go:123] Gathering logs for container status ...
	I1213 19:39:19.868235  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:39:22.406580  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:39:22.416309  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:39:22.416378  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:39:22.442234  195912 cri.go:89] found id: ""
	I1213 19:39:22.442265  195912 logs.go:282] 0 containers: []
	W1213 19:39:22.442274  195912 logs.go:284] No container was found matching "kube-apiserver"
	I1213 19:39:22.442280  195912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:39:22.442342  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:39:22.467941  195912 cri.go:89] found id: ""
	I1213 19:39:22.467963  195912 logs.go:282] 0 containers: []
	W1213 19:39:22.467971  195912 logs.go:284] No container was found matching "etcd"
	I1213 19:39:22.467978  195912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:39:22.468037  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:39:22.494710  195912 cri.go:89] found id: ""
	I1213 19:39:22.494733  195912 logs.go:282] 0 containers: []
	W1213 19:39:22.494741  195912 logs.go:284] No container was found matching "coredns"
	I1213 19:39:22.494747  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:39:22.494804  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:39:22.520331  195912 cri.go:89] found id: ""
	I1213 19:39:22.520353  195912 logs.go:282] 0 containers: []
	W1213 19:39:22.520361  195912 logs.go:284] No container was found matching "kube-scheduler"
	I1213 19:39:22.520368  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:39:22.520425  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:39:22.548297  195912 cri.go:89] found id: ""
	I1213 19:39:22.548319  195912 logs.go:282] 0 containers: []
	W1213 19:39:22.548328  195912 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:39:22.548334  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:39:22.548394  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:39:22.574665  195912 cri.go:89] found id: ""
	I1213 19:39:22.574690  195912 logs.go:282] 0 containers: []
	W1213 19:39:22.574699  195912 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 19:39:22.574705  195912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:39:22.574762  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:39:22.599759  195912 cri.go:89] found id: ""
	I1213 19:39:22.599782  195912 logs.go:282] 0 containers: []
	W1213 19:39:22.599790  195912 logs.go:284] No container was found matching "kindnet"
	I1213 19:39:22.599796  195912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 19:39:22.599857  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 19:39:22.625399  195912 cri.go:89] found id: ""
	I1213 19:39:22.625426  195912 logs.go:282] 0 containers: []
	W1213 19:39:22.625435  195912 logs.go:284] No container was found matching "storage-provisioner"
	I1213 19:39:22.625445  195912 logs.go:123] Gathering logs for container status ...
	I1213 19:39:22.625466  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:39:22.654193  195912 logs.go:123] Gathering logs for kubelet ...
	I1213 19:39:22.654220  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:39:22.722607  195912 logs.go:123] Gathering logs for dmesg ...
	I1213 19:39:22.722646  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:39:22.737480  195912 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:39:22.737509  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:39:22.799637  195912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:39:22.799657  195912 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:39:22.799671  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:39:25.333115  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:39:25.343576  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:39:25.343662  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:39:25.373218  195912 cri.go:89] found id: ""
	I1213 19:39:25.373240  195912 logs.go:282] 0 containers: []
	W1213 19:39:25.373249  195912 logs.go:284] No container was found matching "kube-apiserver"
	I1213 19:39:25.373255  195912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:39:25.373317  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:39:25.407538  195912 cri.go:89] found id: ""
	I1213 19:39:25.407560  195912 logs.go:282] 0 containers: []
	W1213 19:39:25.407573  195912 logs.go:284] No container was found matching "etcd"
	I1213 19:39:25.407579  195912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:39:25.407650  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:39:25.433094  195912 cri.go:89] found id: ""
	I1213 19:39:25.433115  195912 logs.go:282] 0 containers: []
	W1213 19:39:25.433123  195912 logs.go:284] No container was found matching "coredns"
	I1213 19:39:25.433129  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:39:25.433190  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:39:25.459427  195912 cri.go:89] found id: ""
	I1213 19:39:25.459500  195912 logs.go:282] 0 containers: []
	W1213 19:39:25.459525  195912 logs.go:284] No container was found matching "kube-scheduler"
	I1213 19:39:25.459545  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:39:25.459677  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:39:25.484692  195912 cri.go:89] found id: ""
	I1213 19:39:25.484717  195912 logs.go:282] 0 containers: []
	W1213 19:39:25.484726  195912 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:39:25.484731  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:39:25.484831  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:39:25.510430  195912 cri.go:89] found id: ""
	I1213 19:39:25.510510  195912 logs.go:282] 0 containers: []
	W1213 19:39:25.510526  195912 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 19:39:25.510533  195912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:39:25.510616  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:39:25.536623  195912 cri.go:89] found id: ""
	I1213 19:39:25.536695  195912 logs.go:282] 0 containers: []
	W1213 19:39:25.536719  195912 logs.go:284] No container was found matching "kindnet"
	I1213 19:39:25.536738  195912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 19:39:25.536826  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 19:39:25.562572  195912 cri.go:89] found id: ""
	I1213 19:39:25.562636  195912 logs.go:282] 0 containers: []
	W1213 19:39:25.562659  195912 logs.go:284] No container was found matching "storage-provisioner"
	I1213 19:39:25.562675  195912 logs.go:123] Gathering logs for kubelet ...
	I1213 19:39:25.562686  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:39:25.639466  195912 logs.go:123] Gathering logs for dmesg ...
	I1213 19:39:25.639512  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:39:25.657612  195912 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:39:25.657644  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:39:25.765589  195912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:39:25.765614  195912 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:39:25.765626  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:39:25.798704  195912 logs.go:123] Gathering logs for container status ...
	I1213 19:39:25.798741  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:39:28.338248  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:39:28.349028  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:39:28.349097  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:39:28.379029  195912 cri.go:89] found id: ""
	I1213 19:39:28.379051  195912 logs.go:282] 0 containers: []
	W1213 19:39:28.379059  195912 logs.go:284] No container was found matching "kube-apiserver"
	I1213 19:39:28.379065  195912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:39:28.379123  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:39:28.407307  195912 cri.go:89] found id: ""
	I1213 19:39:28.407334  195912 logs.go:282] 0 containers: []
	W1213 19:39:28.407344  195912 logs.go:284] No container was found matching "etcd"
	I1213 19:39:28.407350  195912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:39:28.407409  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:39:28.434927  195912 cri.go:89] found id: ""
	I1213 19:39:28.434949  195912 logs.go:282] 0 containers: []
	W1213 19:39:28.434957  195912 logs.go:284] No container was found matching "coredns"
	I1213 19:39:28.434963  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:39:28.435023  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:39:28.460679  195912 cri.go:89] found id: ""
	I1213 19:39:28.460708  195912 logs.go:282] 0 containers: []
	W1213 19:39:28.460716  195912 logs.go:284] No container was found matching "kube-scheduler"
	I1213 19:39:28.460722  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:39:28.460778  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:39:28.489766  195912 cri.go:89] found id: ""
	I1213 19:39:28.489792  195912 logs.go:282] 0 containers: []
	W1213 19:39:28.489800  195912 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:39:28.489807  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:39:28.489866  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:39:28.514969  195912 cri.go:89] found id: ""
	I1213 19:39:28.514994  195912 logs.go:282] 0 containers: []
	W1213 19:39:28.515003  195912 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 19:39:28.515009  195912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:39:28.515070  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:39:28.546721  195912 cri.go:89] found id: ""
	I1213 19:39:28.546745  195912 logs.go:282] 0 containers: []
	W1213 19:39:28.546753  195912 logs.go:284] No container was found matching "kindnet"
	I1213 19:39:28.546761  195912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 19:39:28.546820  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 19:39:28.579689  195912 cri.go:89] found id: ""
	I1213 19:39:28.579718  195912 logs.go:282] 0 containers: []
	W1213 19:39:28.579732  195912 logs.go:284] No container was found matching "storage-provisioner"
	I1213 19:39:28.579741  195912 logs.go:123] Gathering logs for kubelet ...
	I1213 19:39:28.579757  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:39:28.649603  195912 logs.go:123] Gathering logs for dmesg ...
	I1213 19:39:28.649641  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:39:28.663742  195912 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:39:28.663821  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:39:28.733333  195912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:39:28.733404  195912 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:39:28.733425  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:39:28.764387  195912 logs.go:123] Gathering logs for container status ...
	I1213 19:39:28.764420  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:39:31.296309  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:39:31.306921  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:39:31.306991  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:39:31.337577  195912 cri.go:89] found id: ""
	I1213 19:39:31.337601  195912 logs.go:282] 0 containers: []
	W1213 19:39:31.337610  195912 logs.go:284] No container was found matching "kube-apiserver"
	I1213 19:39:31.337616  195912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:39:31.337677  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:39:31.366862  195912 cri.go:89] found id: ""
	I1213 19:39:31.366883  195912 logs.go:282] 0 containers: []
	W1213 19:39:31.366891  195912 logs.go:284] No container was found matching "etcd"
	I1213 19:39:31.366896  195912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:39:31.366956  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:39:31.400307  195912 cri.go:89] found id: ""
	I1213 19:39:31.400329  195912 logs.go:282] 0 containers: []
	W1213 19:39:31.400338  195912 logs.go:284] No container was found matching "coredns"
	I1213 19:39:31.400343  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:39:31.400400  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:39:31.426592  195912 cri.go:89] found id: ""
	I1213 19:39:31.426617  195912 logs.go:282] 0 containers: []
	W1213 19:39:31.426626  195912 logs.go:284] No container was found matching "kube-scheduler"
	I1213 19:39:31.426633  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:39:31.426693  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:39:31.452698  195912 cri.go:89] found id: ""
	I1213 19:39:31.452723  195912 logs.go:282] 0 containers: []
	W1213 19:39:31.452732  195912 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:39:31.452739  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:39:31.452799  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:39:31.481675  195912 cri.go:89] found id: ""
	I1213 19:39:31.481713  195912 logs.go:282] 0 containers: []
	W1213 19:39:31.481738  195912 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 19:39:31.481745  195912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:39:31.481841  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:39:31.515132  195912 cri.go:89] found id: ""
	I1213 19:39:31.515158  195912 logs.go:282] 0 containers: []
	W1213 19:39:31.515167  195912 logs.go:284] No container was found matching "kindnet"
	I1213 19:39:31.515173  195912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 19:39:31.515232  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 19:39:31.541142  195912 cri.go:89] found id: ""
	I1213 19:39:31.541164  195912 logs.go:282] 0 containers: []
	W1213 19:39:31.541173  195912 logs.go:284] No container was found matching "storage-provisioner"
	I1213 19:39:31.541182  195912 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:39:31.541195  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:39:31.572141  195912 logs.go:123] Gathering logs for container status ...
	I1213 19:39:31.572175  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:39:31.601607  195912 logs.go:123] Gathering logs for kubelet ...
	I1213 19:39:31.601636  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:39:31.673546  195912 logs.go:123] Gathering logs for dmesg ...
	I1213 19:39:31.673583  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:39:31.687824  195912 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:39:31.687857  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:39:31.754964  195912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:39:34.255224  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:39:34.265459  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:39:34.265532  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:39:34.291535  195912 cri.go:89] found id: ""
	I1213 19:39:34.291558  195912 logs.go:282] 0 containers: []
	W1213 19:39:34.291572  195912 logs.go:284] No container was found matching "kube-apiserver"
	I1213 19:39:34.291583  195912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:39:34.291669  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:39:34.316812  195912 cri.go:89] found id: ""
	I1213 19:39:34.316837  195912 logs.go:282] 0 containers: []
	W1213 19:39:34.316845  195912 logs.go:284] No container was found matching "etcd"
	I1213 19:39:34.316851  195912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:39:34.316912  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:39:34.352453  195912 cri.go:89] found id: ""
	I1213 19:39:34.352485  195912 logs.go:282] 0 containers: []
	W1213 19:39:34.352500  195912 logs.go:284] No container was found matching "coredns"
	I1213 19:39:34.352507  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:39:34.352581  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:39:34.386348  195912 cri.go:89] found id: ""
	I1213 19:39:34.386373  195912 logs.go:282] 0 containers: []
	W1213 19:39:34.386390  195912 logs.go:284] No container was found matching "kube-scheduler"
	I1213 19:39:34.386398  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:39:34.386459  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:39:34.426574  195912 cri.go:89] found id: ""
	I1213 19:39:34.426598  195912 logs.go:282] 0 containers: []
	W1213 19:39:34.426608  195912 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:39:34.426614  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:39:34.426673  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:39:34.451232  195912 cri.go:89] found id: ""
	I1213 19:39:34.451258  195912 logs.go:282] 0 containers: []
	W1213 19:39:34.451267  195912 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 19:39:34.451273  195912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:39:34.451333  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:39:34.477779  195912 cri.go:89] found id: ""
	I1213 19:39:34.477802  195912 logs.go:282] 0 containers: []
	W1213 19:39:34.477811  195912 logs.go:284] No container was found matching "kindnet"
	I1213 19:39:34.477817  195912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 19:39:34.477887  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 19:39:34.503655  195912 cri.go:89] found id: ""
	I1213 19:39:34.503682  195912 logs.go:282] 0 containers: []
	W1213 19:39:34.503692  195912 logs.go:284] No container was found matching "storage-provisioner"
	I1213 19:39:34.503702  195912 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:39:34.503730  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:39:34.535046  195912 logs.go:123] Gathering logs for container status ...
	I1213 19:39:34.535084  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:39:34.564182  195912 logs.go:123] Gathering logs for kubelet ...
	I1213 19:39:34.564208  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:39:34.635113  195912 logs.go:123] Gathering logs for dmesg ...
	I1213 19:39:34.635146  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:39:34.649643  195912 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:39:34.649673  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:39:34.718426  195912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:39:37.220160  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:39:37.229873  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:39:37.229945  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:39:37.255447  195912 cri.go:89] found id: ""
	I1213 19:39:37.255467  195912 logs.go:282] 0 containers: []
	W1213 19:39:37.255476  195912 logs.go:284] No container was found matching "kube-apiserver"
	I1213 19:39:37.255482  195912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:39:37.255538  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:39:37.280968  195912 cri.go:89] found id: ""
	I1213 19:39:37.280990  195912 logs.go:282] 0 containers: []
	W1213 19:39:37.280998  195912 logs.go:284] No container was found matching "etcd"
	I1213 19:39:37.281046  195912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:39:37.281103  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:39:37.305480  195912 cri.go:89] found id: ""
	I1213 19:39:37.305504  195912 logs.go:282] 0 containers: []
	W1213 19:39:37.305513  195912 logs.go:284] No container was found matching "coredns"
	I1213 19:39:37.305519  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:39:37.305581  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:39:37.340239  195912 cri.go:89] found id: ""
	I1213 19:39:37.340268  195912 logs.go:282] 0 containers: []
	W1213 19:39:37.340276  195912 logs.go:284] No container was found matching "kube-scheduler"
	I1213 19:39:37.340288  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:39:37.340356  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:39:37.370224  195912 cri.go:89] found id: ""
	I1213 19:39:37.370245  195912 logs.go:282] 0 containers: []
	W1213 19:39:37.370253  195912 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:39:37.370259  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:39:37.370336  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:39:37.399608  195912 cri.go:89] found id: ""
	I1213 19:39:37.399632  195912 logs.go:282] 0 containers: []
	W1213 19:39:37.399645  195912 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 19:39:37.399653  195912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:39:37.399725  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:39:37.426405  195912 cri.go:89] found id: ""
	I1213 19:39:37.426433  195912 logs.go:282] 0 containers: []
	W1213 19:39:37.426442  195912 logs.go:284] No container was found matching "kindnet"
	I1213 19:39:37.426448  195912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 19:39:37.426505  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 19:39:37.450464  195912 cri.go:89] found id: ""
	I1213 19:39:37.450487  195912 logs.go:282] 0 containers: []
	W1213 19:39:37.450496  195912 logs.go:284] No container was found matching "storage-provisioner"
	I1213 19:39:37.450506  195912 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:39:37.450520  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:39:37.515181  195912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:39:37.515199  195912 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:39:37.515211  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:39:37.546327  195912 logs.go:123] Gathering logs for container status ...
	I1213 19:39:37.546360  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:39:37.575365  195912 logs.go:123] Gathering logs for kubelet ...
	I1213 19:39:37.575396  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:39:37.648584  195912 logs.go:123] Gathering logs for dmesg ...
	I1213 19:39:37.648622  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:39:40.163375  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:39:40.174364  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:39:40.174432  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:39:40.207104  195912 cri.go:89] found id: ""
	I1213 19:39:40.207125  195912 logs.go:282] 0 containers: []
	W1213 19:39:40.207133  195912 logs.go:284] No container was found matching "kube-apiserver"
	I1213 19:39:40.207139  195912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:39:40.207205  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:39:40.266053  195912 cri.go:89] found id: ""
	I1213 19:39:40.266074  195912 logs.go:282] 0 containers: []
	W1213 19:39:40.266082  195912 logs.go:284] No container was found matching "etcd"
	I1213 19:39:40.266088  195912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:39:40.266147  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:39:40.303320  195912 cri.go:89] found id: ""
	I1213 19:39:40.303350  195912 logs.go:282] 0 containers: []
	W1213 19:39:40.303359  195912 logs.go:284] No container was found matching "coredns"
	I1213 19:39:40.303365  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:39:40.303424  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:39:40.343238  195912 cri.go:89] found id: ""
	I1213 19:39:40.343263  195912 logs.go:282] 0 containers: []
	W1213 19:39:40.343271  195912 logs.go:284] No container was found matching "kube-scheduler"
	I1213 19:39:40.343277  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:39:40.343336  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:39:40.378911  195912 cri.go:89] found id: ""
	I1213 19:39:40.378940  195912 logs.go:282] 0 containers: []
	W1213 19:39:40.378948  195912 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:39:40.378954  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:39:40.379013  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:39:40.424040  195912 cri.go:89] found id: ""
	I1213 19:39:40.424066  195912 logs.go:282] 0 containers: []
	W1213 19:39:40.424075  195912 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 19:39:40.424081  195912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:39:40.424142  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:39:40.465261  195912 cri.go:89] found id: ""
	I1213 19:39:40.465288  195912 logs.go:282] 0 containers: []
	W1213 19:39:40.465300  195912 logs.go:284] No container was found matching "kindnet"
	I1213 19:39:40.465307  195912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 19:39:40.465363  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 19:39:40.499254  195912 cri.go:89] found id: ""
	I1213 19:39:40.499283  195912 logs.go:282] 0 containers: []
	W1213 19:39:40.499292  195912 logs.go:284] No container was found matching "storage-provisioner"
	I1213 19:39:40.499300  195912 logs.go:123] Gathering logs for kubelet ...
	I1213 19:39:40.499311  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:39:40.575611  195912 logs.go:123] Gathering logs for dmesg ...
	I1213 19:39:40.575649  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:39:40.591358  195912 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:39:40.591388  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:39:40.657547  195912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:39:40.657572  195912 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:39:40.657596  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:39:40.689540  195912 logs.go:123] Gathering logs for container status ...
	I1213 19:39:40.689573  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:39:43.218500  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:39:43.237789  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:39:43.237863  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:39:43.268795  195912 cri.go:89] found id: ""
	I1213 19:39:43.268821  195912 logs.go:282] 0 containers: []
	W1213 19:39:43.268830  195912 logs.go:284] No container was found matching "kube-apiserver"
	I1213 19:39:43.268836  195912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:39:43.268903  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:39:43.308325  195912 cri.go:89] found id: ""
	I1213 19:39:43.308352  195912 logs.go:282] 0 containers: []
	W1213 19:39:43.308361  195912 logs.go:284] No container was found matching "etcd"
	I1213 19:39:43.308367  195912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:39:43.308427  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:39:43.348957  195912 cri.go:89] found id: ""
	I1213 19:39:43.348983  195912 logs.go:282] 0 containers: []
	W1213 19:39:43.348991  195912 logs.go:284] No container was found matching "coredns"
	I1213 19:39:43.348997  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:39:43.349120  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:39:43.399341  195912 cri.go:89] found id: ""
	I1213 19:39:43.399367  195912 logs.go:282] 0 containers: []
	W1213 19:39:43.399376  195912 logs.go:284] No container was found matching "kube-scheduler"
	I1213 19:39:43.399382  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:39:43.399439  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:39:43.438482  195912 cri.go:89] found id: ""
	I1213 19:39:43.438505  195912 logs.go:282] 0 containers: []
	W1213 19:39:43.438513  195912 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:39:43.438519  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:39:43.438582  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:39:43.475546  195912 cri.go:89] found id: ""
	I1213 19:39:43.475569  195912 logs.go:282] 0 containers: []
	W1213 19:39:43.475577  195912 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 19:39:43.475591  195912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:39:43.475656  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:39:43.509494  195912 cri.go:89] found id: ""
	I1213 19:39:43.509517  195912 logs.go:282] 0 containers: []
	W1213 19:39:43.509525  195912 logs.go:284] No container was found matching "kindnet"
	I1213 19:39:43.509537  195912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 19:39:43.509602  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 19:39:43.547727  195912 cri.go:89] found id: ""
	I1213 19:39:43.547803  195912 logs.go:282] 0 containers: []
	W1213 19:39:43.547834  195912 logs.go:284] No container was found matching "storage-provisioner"
	I1213 19:39:43.547857  195912 logs.go:123] Gathering logs for container status ...
	I1213 19:39:43.547894  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:39:43.588115  195912 logs.go:123] Gathering logs for kubelet ...
	I1213 19:39:43.588140  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:39:43.666002  195912 logs.go:123] Gathering logs for dmesg ...
	I1213 19:39:43.666060  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:39:43.683055  195912 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:39:43.683136  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:39:43.765918  195912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:39:43.766003  195912 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:39:43.766031  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:39:46.303878  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:39:46.314070  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:39:46.314148  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:39:46.351340  195912 cri.go:89] found id: ""
	I1213 19:39:46.351362  195912 logs.go:282] 0 containers: []
	W1213 19:39:46.351370  195912 logs.go:284] No container was found matching "kube-apiserver"
	I1213 19:39:46.351377  195912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:39:46.351436  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:39:46.390444  195912 cri.go:89] found id: ""
	I1213 19:39:46.390468  195912 logs.go:282] 0 containers: []
	W1213 19:39:46.390477  195912 logs.go:284] No container was found matching "etcd"
	I1213 19:39:46.390483  195912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:39:46.390548  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:39:46.421256  195912 cri.go:89] found id: ""
	I1213 19:39:46.421279  195912 logs.go:282] 0 containers: []
	W1213 19:39:46.421287  195912 logs.go:284] No container was found matching "coredns"
	I1213 19:39:46.421293  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:39:46.421383  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:39:46.450978  195912 cri.go:89] found id: ""
	I1213 19:39:46.451006  195912 logs.go:282] 0 containers: []
	W1213 19:39:46.451016  195912 logs.go:284] No container was found matching "kube-scheduler"
	I1213 19:39:46.451022  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:39:46.451087  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:39:46.477237  195912 cri.go:89] found id: ""
	I1213 19:39:46.477259  195912 logs.go:282] 0 containers: []
	W1213 19:39:46.477268  195912 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:39:46.477274  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:39:46.477333  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:39:46.502185  195912 cri.go:89] found id: ""
	I1213 19:39:46.502212  195912 logs.go:282] 0 containers: []
	W1213 19:39:46.502220  195912 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 19:39:46.502226  195912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:39:46.502285  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:39:46.528692  195912 cri.go:89] found id: ""
	I1213 19:39:46.528717  195912 logs.go:282] 0 containers: []
	W1213 19:39:46.528729  195912 logs.go:284] No container was found matching "kindnet"
	I1213 19:39:46.528760  195912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 19:39:46.528849  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 19:39:46.557090  195912 cri.go:89] found id: ""
	I1213 19:39:46.557113  195912 logs.go:282] 0 containers: []
	W1213 19:39:46.557122  195912 logs.go:284] No container was found matching "storage-provisioner"
	I1213 19:39:46.557132  195912 logs.go:123] Gathering logs for dmesg ...
	I1213 19:39:46.557144  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:39:46.572572  195912 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:39:46.572652  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:39:46.660071  195912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:39:46.660139  195912 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:39:46.660165  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:39:46.695232  195912 logs.go:123] Gathering logs for container status ...
	I1213 19:39:46.695291  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:39:46.735568  195912 logs.go:123] Gathering logs for kubelet ...
	I1213 19:39:46.735647  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:39:49.311152  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:39:49.321269  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:39:49.321340  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:39:49.358034  195912 cri.go:89] found id: ""
	I1213 19:39:49.358060  195912 logs.go:282] 0 containers: []
	W1213 19:39:49.358070  195912 logs.go:284] No container was found matching "kube-apiserver"
	I1213 19:39:49.358076  195912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:39:49.358139  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:39:49.387411  195912 cri.go:89] found id: ""
	I1213 19:39:49.387436  195912 logs.go:282] 0 containers: []
	W1213 19:39:49.387446  195912 logs.go:284] No container was found matching "etcd"
	I1213 19:39:49.387452  195912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:39:49.387519  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:39:49.415110  195912 cri.go:89] found id: ""
	I1213 19:39:49.415133  195912 logs.go:282] 0 containers: []
	W1213 19:39:49.415141  195912 logs.go:284] No container was found matching "coredns"
	I1213 19:39:49.415155  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:39:49.415214  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:39:49.443333  195912 cri.go:89] found id: ""
	I1213 19:39:49.443358  195912 logs.go:282] 0 containers: []
	W1213 19:39:49.443367  195912 logs.go:284] No container was found matching "kube-scheduler"
	I1213 19:39:49.443374  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:39:49.443432  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:39:49.471826  195912 cri.go:89] found id: ""
	I1213 19:39:49.471854  195912 logs.go:282] 0 containers: []
	W1213 19:39:49.471863  195912 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:39:49.471869  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:39:49.471928  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:39:49.499578  195912 cri.go:89] found id: ""
	I1213 19:39:49.499606  195912 logs.go:282] 0 containers: []
	W1213 19:39:49.499615  195912 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 19:39:49.499621  195912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:39:49.499681  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:39:49.526362  195912 cri.go:89] found id: ""
	I1213 19:39:49.526388  195912 logs.go:282] 0 containers: []
	W1213 19:39:49.526398  195912 logs.go:284] No container was found matching "kindnet"
	I1213 19:39:49.526404  195912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 19:39:49.526462  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 19:39:49.557096  195912 cri.go:89] found id: ""
	I1213 19:39:49.557125  195912 logs.go:282] 0 containers: []
	W1213 19:39:49.557139  195912 logs.go:284] No container was found matching "storage-provisioner"
	I1213 19:39:49.557153  195912 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:39:49.557165  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:39:49.589763  195912 logs.go:123] Gathering logs for container status ...
	I1213 19:39:49.589802  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:39:49.619091  195912 logs.go:123] Gathering logs for kubelet ...
	I1213 19:39:49.619121  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:39:49.691148  195912 logs.go:123] Gathering logs for dmesg ...
	I1213 19:39:49.691188  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:39:49.705933  195912 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:39:49.705965  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:39:49.774450  195912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:39:52.275346  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:39:52.285423  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:39:52.285499  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:39:52.310880  195912 cri.go:89] found id: ""
	I1213 19:39:52.310903  195912 logs.go:282] 0 containers: []
	W1213 19:39:52.310911  195912 logs.go:284] No container was found matching "kube-apiserver"
	I1213 19:39:52.310917  195912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:39:52.310975  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:39:52.344114  195912 cri.go:89] found id: ""
	I1213 19:39:52.344145  195912 logs.go:282] 0 containers: []
	W1213 19:39:52.344155  195912 logs.go:284] No container was found matching "etcd"
	I1213 19:39:52.344162  195912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:39:52.344220  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:39:52.381485  195912 cri.go:89] found id: ""
	I1213 19:39:52.381510  195912 logs.go:282] 0 containers: []
	W1213 19:39:52.381519  195912 logs.go:284] No container was found matching "coredns"
	I1213 19:39:52.381525  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:39:52.381582  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:39:52.409100  195912 cri.go:89] found id: ""
	I1213 19:39:52.409123  195912 logs.go:282] 0 containers: []
	W1213 19:39:52.409132  195912 logs.go:284] No container was found matching "kube-scheduler"
	I1213 19:39:52.409139  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:39:52.409197  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:39:52.435284  195912 cri.go:89] found id: ""
	I1213 19:39:52.435310  195912 logs.go:282] 0 containers: []
	W1213 19:39:52.435318  195912 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:39:52.435324  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:39:52.435385  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:39:52.461347  195912 cri.go:89] found id: ""
	I1213 19:39:52.461369  195912 logs.go:282] 0 containers: []
	W1213 19:39:52.461377  195912 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 19:39:52.461383  195912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:39:52.461442  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:39:52.487471  195912 cri.go:89] found id: ""
	I1213 19:39:52.487509  195912 logs.go:282] 0 containers: []
	W1213 19:39:52.487517  195912 logs.go:284] No container was found matching "kindnet"
	I1213 19:39:52.487523  195912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 19:39:52.487589  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 19:39:52.512497  195912 cri.go:89] found id: ""
	I1213 19:39:52.512520  195912 logs.go:282] 0 containers: []
	W1213 19:39:52.512528  195912 logs.go:284] No container was found matching "storage-provisioner"
	I1213 19:39:52.512537  195912 logs.go:123] Gathering logs for kubelet ...
	I1213 19:39:52.512548  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:39:52.580046  195912 logs.go:123] Gathering logs for dmesg ...
	I1213 19:39:52.580080  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:39:52.594494  195912 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:39:52.594522  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:39:52.659500  195912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:39:52.659562  195912 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:39:52.659593  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:39:52.690761  195912 logs.go:123] Gathering logs for container status ...
	I1213 19:39:52.690792  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:39:55.222755  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:39:55.232742  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:39:55.232809  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:39:55.257937  195912 cri.go:89] found id: ""
	I1213 19:39:55.257964  195912 logs.go:282] 0 containers: []
	W1213 19:39:55.257973  195912 logs.go:284] No container was found matching "kube-apiserver"
	I1213 19:39:55.257979  195912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:39:55.258035  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:39:55.283720  195912 cri.go:89] found id: ""
	I1213 19:39:55.283747  195912 logs.go:282] 0 containers: []
	W1213 19:39:55.283755  195912 logs.go:284] No container was found matching "etcd"
	I1213 19:39:55.283762  195912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:39:55.283822  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:39:55.316025  195912 cri.go:89] found id: ""
	I1213 19:39:55.316050  195912 logs.go:282] 0 containers: []
	W1213 19:39:55.316058  195912 logs.go:284] No container was found matching "coredns"
	I1213 19:39:55.316065  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:39:55.316123  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:39:55.350518  195912 cri.go:89] found id: ""
	I1213 19:39:55.350539  195912 logs.go:282] 0 containers: []
	W1213 19:39:55.350547  195912 logs.go:284] No container was found matching "kube-scheduler"
	I1213 19:39:55.350559  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:39:55.350619  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:39:55.379256  195912 cri.go:89] found id: ""
	I1213 19:39:55.379278  195912 logs.go:282] 0 containers: []
	W1213 19:39:55.379286  195912 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:39:55.379292  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:39:55.379350  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:39:55.408945  195912 cri.go:89] found id: ""
	I1213 19:39:55.408967  195912 logs.go:282] 0 containers: []
	W1213 19:39:55.408976  195912 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 19:39:55.408982  195912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:39:55.409104  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:39:55.434395  195912 cri.go:89] found id: ""
	I1213 19:39:55.434421  195912 logs.go:282] 0 containers: []
	W1213 19:39:55.434430  195912 logs.go:284] No container was found matching "kindnet"
	I1213 19:39:55.434437  195912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 19:39:55.434495  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 19:39:55.459871  195912 cri.go:89] found id: ""
	I1213 19:39:55.459897  195912 logs.go:282] 0 containers: []
	W1213 19:39:55.459908  195912 logs.go:284] No container was found matching "storage-provisioner"
	I1213 19:39:55.459918  195912 logs.go:123] Gathering logs for kubelet ...
	I1213 19:39:55.459928  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:39:55.527367  195912 logs.go:123] Gathering logs for dmesg ...
	I1213 19:39:55.527406  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:39:55.541714  195912 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:39:55.541745  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:39:55.608063  195912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:39:55.608086  195912 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:39:55.608099  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:39:55.639147  195912 logs.go:123] Gathering logs for container status ...
	I1213 19:39:55.639181  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:39:58.172190  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:39:58.190928  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:39:58.191003  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:39:58.251022  195912 cri.go:89] found id: ""
	I1213 19:39:58.251051  195912 logs.go:282] 0 containers: []
	W1213 19:39:58.251059  195912 logs.go:284] No container was found matching "kube-apiserver"
	I1213 19:39:58.251065  195912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:39:58.251130  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:39:58.298889  195912 cri.go:89] found id: ""
	I1213 19:39:58.298917  195912 logs.go:282] 0 containers: []
	W1213 19:39:58.298927  195912 logs.go:284] No container was found matching "etcd"
	I1213 19:39:58.298933  195912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:39:58.298994  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:39:58.359331  195912 cri.go:89] found id: ""
	I1213 19:39:58.359360  195912 logs.go:282] 0 containers: []
	W1213 19:39:58.359369  195912 logs.go:284] No container was found matching "coredns"
	I1213 19:39:58.359375  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:39:58.359433  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:39:58.413125  195912 cri.go:89] found id: ""
	I1213 19:39:58.413152  195912 logs.go:282] 0 containers: []
	W1213 19:39:58.413161  195912 logs.go:284] No container was found matching "kube-scheduler"
	I1213 19:39:58.413174  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:39:58.413255  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:39:58.473985  195912 cri.go:89] found id: ""
	I1213 19:39:58.474013  195912 logs.go:282] 0 containers: []
	W1213 19:39:58.474021  195912 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:39:58.474027  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:39:58.474101  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:39:58.540060  195912 cri.go:89] found id: ""
	I1213 19:39:58.540089  195912 logs.go:282] 0 containers: []
	W1213 19:39:58.540098  195912 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 19:39:58.540104  195912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:39:58.540171  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:39:58.583296  195912 cri.go:89] found id: ""
	I1213 19:39:58.583335  195912 logs.go:282] 0 containers: []
	W1213 19:39:58.583344  195912 logs.go:284] No container was found matching "kindnet"
	I1213 19:39:58.583349  195912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 19:39:58.583410  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 19:39:58.633480  195912 cri.go:89] found id: ""
	I1213 19:39:58.633507  195912 logs.go:282] 0 containers: []
	W1213 19:39:58.633516  195912 logs.go:284] No container was found matching "storage-provisioner"
	I1213 19:39:58.633526  195912 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:39:58.633537  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:39:58.688499  195912 logs.go:123] Gathering logs for container status ...
	I1213 19:39:58.688541  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:39:58.749274  195912 logs.go:123] Gathering logs for kubelet ...
	I1213 19:39:58.749311  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:39:58.829726  195912 logs.go:123] Gathering logs for dmesg ...
	I1213 19:39:58.829806  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:39:58.844851  195912 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:39:58.844928  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:39:58.956243  195912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:40:01.456522  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:40:01.467210  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:40:01.467287  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:40:01.499958  195912 cri.go:89] found id: ""
	I1213 19:40:01.499981  195912 logs.go:282] 0 containers: []
	W1213 19:40:01.499990  195912 logs.go:284] No container was found matching "kube-apiserver"
	I1213 19:40:01.499996  195912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:40:01.500054  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:40:01.529126  195912 cri.go:89] found id: ""
	I1213 19:40:01.529154  195912 logs.go:282] 0 containers: []
	W1213 19:40:01.529164  195912 logs.go:284] No container was found matching "etcd"
	I1213 19:40:01.529171  195912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:40:01.529245  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:40:01.556804  195912 cri.go:89] found id: ""
	I1213 19:40:01.556829  195912 logs.go:282] 0 containers: []
	W1213 19:40:01.556838  195912 logs.go:284] No container was found matching "coredns"
	I1213 19:40:01.556844  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:40:01.556908  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:40:01.583723  195912 cri.go:89] found id: ""
	I1213 19:40:01.583749  195912 logs.go:282] 0 containers: []
	W1213 19:40:01.583758  195912 logs.go:284] No container was found matching "kube-scheduler"
	I1213 19:40:01.583764  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:40:01.583827  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:40:01.611556  195912 cri.go:89] found id: ""
	I1213 19:40:01.611583  195912 logs.go:282] 0 containers: []
	W1213 19:40:01.611592  195912 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:40:01.611598  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:40:01.611675  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:40:01.639121  195912 cri.go:89] found id: ""
	I1213 19:40:01.639145  195912 logs.go:282] 0 containers: []
	W1213 19:40:01.639154  195912 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 19:40:01.639160  195912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:40:01.639227  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:40:01.667478  195912 cri.go:89] found id: ""
	I1213 19:40:01.667503  195912 logs.go:282] 0 containers: []
	W1213 19:40:01.667512  195912 logs.go:284] No container was found matching "kindnet"
	I1213 19:40:01.667518  195912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 19:40:01.667581  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 19:40:01.694231  195912 cri.go:89] found id: ""
	I1213 19:40:01.694258  195912 logs.go:282] 0 containers: []
	W1213 19:40:01.694267  195912 logs.go:284] No container was found matching "storage-provisioner"
	I1213 19:40:01.694276  195912 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:40:01.694287  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:40:01.762590  195912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:40:01.762666  195912 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:40:01.762695  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:40:01.794050  195912 logs.go:123] Gathering logs for container status ...
	I1213 19:40:01.794086  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:40:01.831514  195912 logs.go:123] Gathering logs for kubelet ...
	I1213 19:40:01.831595  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:40:01.915089  195912 logs.go:123] Gathering logs for dmesg ...
	I1213 19:40:01.915127  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:40:04.432558  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:40:04.442778  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:40:04.442854  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:40:04.469268  195912 cri.go:89] found id: ""
	I1213 19:40:04.469291  195912 logs.go:282] 0 containers: []
	W1213 19:40:04.469299  195912 logs.go:284] No container was found matching "kube-apiserver"
	I1213 19:40:04.469305  195912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:40:04.469366  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:40:04.494626  195912 cri.go:89] found id: ""
	I1213 19:40:04.494652  195912 logs.go:282] 0 containers: []
	W1213 19:40:04.494661  195912 logs.go:284] No container was found matching "etcd"
	I1213 19:40:04.494667  195912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:40:04.494731  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:40:04.521612  195912 cri.go:89] found id: ""
	I1213 19:40:04.521640  195912 logs.go:282] 0 containers: []
	W1213 19:40:04.521649  195912 logs.go:284] No container was found matching "coredns"
	I1213 19:40:04.521656  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:40:04.521720  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:40:04.547697  195912 cri.go:89] found id: ""
	I1213 19:40:04.547726  195912 logs.go:282] 0 containers: []
	W1213 19:40:04.547740  195912 logs.go:284] No container was found matching "kube-scheduler"
	I1213 19:40:04.547747  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:40:04.547809  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:40:04.577121  195912 cri.go:89] found id: ""
	I1213 19:40:04.577146  195912 logs.go:282] 0 containers: []
	W1213 19:40:04.577154  195912 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:40:04.577161  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:40:04.577224  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:40:04.603161  195912 cri.go:89] found id: ""
	I1213 19:40:04.603185  195912 logs.go:282] 0 containers: []
	W1213 19:40:04.603194  195912 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 19:40:04.603201  195912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:40:04.603269  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:40:04.631760  195912 cri.go:89] found id: ""
	I1213 19:40:04.631788  195912 logs.go:282] 0 containers: []
	W1213 19:40:04.631797  195912 logs.go:284] No container was found matching "kindnet"
	I1213 19:40:04.631803  195912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 19:40:04.631862  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 19:40:04.659132  195912 cri.go:89] found id: ""
	I1213 19:40:04.659155  195912 logs.go:282] 0 containers: []
	W1213 19:40:04.659164  195912 logs.go:284] No container was found matching "storage-provisioner"
	I1213 19:40:04.659172  195912 logs.go:123] Gathering logs for kubelet ...
	I1213 19:40:04.659183  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:40:04.728601  195912 logs.go:123] Gathering logs for dmesg ...
	I1213 19:40:04.728638  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:40:04.744190  195912 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:40:04.744218  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:40:04.806560  195912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:40:04.806584  195912 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:40:04.806597  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:40:04.838933  195912 logs.go:123] Gathering logs for container status ...
	I1213 19:40:04.838970  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:40:07.381594  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:40:07.391754  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:40:07.391822  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:40:07.417536  195912 cri.go:89] found id: ""
	I1213 19:40:07.417560  195912 logs.go:282] 0 containers: []
	W1213 19:40:07.417569  195912 logs.go:284] No container was found matching "kube-apiserver"
	I1213 19:40:07.417575  195912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:40:07.417635  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:40:07.442914  195912 cri.go:89] found id: ""
	I1213 19:40:07.442980  195912 logs.go:282] 0 containers: []
	W1213 19:40:07.442995  195912 logs.go:284] No container was found matching "etcd"
	I1213 19:40:07.443002  195912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:40:07.443061  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:40:07.468981  195912 cri.go:89] found id: ""
	I1213 19:40:07.469087  195912 logs.go:282] 0 containers: []
	W1213 19:40:07.469100  195912 logs.go:284] No container was found matching "coredns"
	I1213 19:40:07.469106  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:40:07.469172  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:40:07.495299  195912 cri.go:89] found id: ""
	I1213 19:40:07.495325  195912 logs.go:282] 0 containers: []
	W1213 19:40:07.495341  195912 logs.go:284] No container was found matching "kube-scheduler"
	I1213 19:40:07.495348  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:40:07.495417  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:40:07.525468  195912 cri.go:89] found id: ""
	I1213 19:40:07.525493  195912 logs.go:282] 0 containers: []
	W1213 19:40:07.525502  195912 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:40:07.525511  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:40:07.525593  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:40:07.551756  195912 cri.go:89] found id: ""
	I1213 19:40:07.551783  195912 logs.go:282] 0 containers: []
	W1213 19:40:07.551793  195912 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 19:40:07.551800  195912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:40:07.551862  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:40:07.579139  195912 cri.go:89] found id: ""
	I1213 19:40:07.579164  195912 logs.go:282] 0 containers: []
	W1213 19:40:07.579172  195912 logs.go:284] No container was found matching "kindnet"
	I1213 19:40:07.579178  195912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 19:40:07.579242  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 19:40:07.605323  195912 cri.go:89] found id: ""
	I1213 19:40:07.605347  195912 logs.go:282] 0 containers: []
	W1213 19:40:07.605355  195912 logs.go:284] No container was found matching "storage-provisioner"
	I1213 19:40:07.605364  195912 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:40:07.605375  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:40:07.637600  195912 logs.go:123] Gathering logs for container status ...
	I1213 19:40:07.637677  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:40:07.669876  195912 logs.go:123] Gathering logs for kubelet ...
	I1213 19:40:07.669953  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:40:07.742679  195912 logs.go:123] Gathering logs for dmesg ...
	I1213 19:40:07.742728  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:40:07.757100  195912 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:40:07.757127  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:40:07.885107  195912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:40:10.386369  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:40:10.396795  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:40:10.396861  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:40:10.425871  195912 cri.go:89] found id: ""
	I1213 19:40:10.425896  195912 logs.go:282] 0 containers: []
	W1213 19:40:10.425905  195912 logs.go:284] No container was found matching "kube-apiserver"
	I1213 19:40:10.425911  195912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:40:10.426028  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:40:10.453093  195912 cri.go:89] found id: ""
	I1213 19:40:10.453167  195912 logs.go:282] 0 containers: []
	W1213 19:40:10.453190  195912 logs.go:284] No container was found matching "etcd"
	I1213 19:40:10.453207  195912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:40:10.453272  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:40:10.485474  195912 cri.go:89] found id: ""
	I1213 19:40:10.485499  195912 logs.go:282] 0 containers: []
	W1213 19:40:10.485508  195912 logs.go:284] No container was found matching "coredns"
	I1213 19:40:10.485514  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:40:10.485620  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:40:10.512418  195912 cri.go:89] found id: ""
	I1213 19:40:10.512443  195912 logs.go:282] 0 containers: []
	W1213 19:40:10.512452  195912 logs.go:284] No container was found matching "kube-scheduler"
	I1213 19:40:10.512459  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:40:10.512524  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:40:10.544254  195912 cri.go:89] found id: ""
	I1213 19:40:10.544280  195912 logs.go:282] 0 containers: []
	W1213 19:40:10.544289  195912 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:40:10.544295  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:40:10.544357  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:40:10.574740  195912 cri.go:89] found id: ""
	I1213 19:40:10.574764  195912 logs.go:282] 0 containers: []
	W1213 19:40:10.574773  195912 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 19:40:10.574779  195912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:40:10.574857  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:40:10.600039  195912 cri.go:89] found id: ""
	I1213 19:40:10.600065  195912 logs.go:282] 0 containers: []
	W1213 19:40:10.600073  195912 logs.go:284] No container was found matching "kindnet"
	I1213 19:40:10.600079  195912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 19:40:10.600138  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 19:40:10.626138  195912 cri.go:89] found id: ""
	I1213 19:40:10.626164  195912 logs.go:282] 0 containers: []
	W1213 19:40:10.626173  195912 logs.go:284] No container was found matching "storage-provisioner"
	I1213 19:40:10.626182  195912 logs.go:123] Gathering logs for kubelet ...
	I1213 19:40:10.626193  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:40:10.694804  195912 logs.go:123] Gathering logs for dmesg ...
	I1213 19:40:10.694841  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:40:10.711491  195912 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:40:10.711523  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:40:10.779579  195912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:40:10.779599  195912 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:40:10.779611  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:40:10.811206  195912 logs.go:123] Gathering logs for container status ...
	I1213 19:40:10.811243  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:40:13.352731  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:40:13.362792  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:40:13.362904  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:40:13.389079  195912 cri.go:89] found id: ""
	I1213 19:40:13.389109  195912 logs.go:282] 0 containers: []
	W1213 19:40:13.389118  195912 logs.go:284] No container was found matching "kube-apiserver"
	I1213 19:40:13.389125  195912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:40:13.389201  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:40:13.418414  195912 cri.go:89] found id: ""
	I1213 19:40:13.418482  195912 logs.go:282] 0 containers: []
	W1213 19:40:13.418497  195912 logs.go:284] No container was found matching "etcd"
	I1213 19:40:13.418504  195912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:40:13.418565  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:40:13.444896  195912 cri.go:89] found id: ""
	I1213 19:40:13.444922  195912 logs.go:282] 0 containers: []
	W1213 19:40:13.444931  195912 logs.go:284] No container was found matching "coredns"
	I1213 19:40:13.444937  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:40:13.444997  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:40:13.470387  195912 cri.go:89] found id: ""
	I1213 19:40:13.470413  195912 logs.go:282] 0 containers: []
	W1213 19:40:13.470423  195912 logs.go:284] No container was found matching "kube-scheduler"
	I1213 19:40:13.470429  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:40:13.470488  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:40:13.496438  195912 cri.go:89] found id: ""
	I1213 19:40:13.496463  195912 logs.go:282] 0 containers: []
	W1213 19:40:13.496472  195912 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:40:13.496478  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:40:13.496535  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:40:13.527209  195912 cri.go:89] found id: ""
	I1213 19:40:13.527243  195912 logs.go:282] 0 containers: []
	W1213 19:40:13.527253  195912 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 19:40:13.527260  195912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:40:13.527334  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:40:13.553219  195912 cri.go:89] found id: ""
	I1213 19:40:13.553246  195912 logs.go:282] 0 containers: []
	W1213 19:40:13.553256  195912 logs.go:284] No container was found matching "kindnet"
	I1213 19:40:13.553262  195912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 19:40:13.553326  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 19:40:13.579574  195912 cri.go:89] found id: ""
	I1213 19:40:13.579601  195912 logs.go:282] 0 containers: []
	W1213 19:40:13.579611  195912 logs.go:284] No container was found matching "storage-provisioner"
	I1213 19:40:13.579620  195912 logs.go:123] Gathering logs for container status ...
	I1213 19:40:13.579633  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:40:13.617277  195912 logs.go:123] Gathering logs for kubelet ...
	I1213 19:40:13.617307  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:40:13.687722  195912 logs.go:123] Gathering logs for dmesg ...
	I1213 19:40:13.687757  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:40:13.702215  195912 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:40:13.702280  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:40:13.772363  195912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:40:13.772385  195912 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:40:13.772398  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:40:16.304881  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:40:16.314841  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:40:16.314910  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:40:16.340112  195912 cri.go:89] found id: ""
	I1213 19:40:16.340135  195912 logs.go:282] 0 containers: []
	W1213 19:40:16.340143  195912 logs.go:284] No container was found matching "kube-apiserver"
	I1213 19:40:16.340149  195912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:40:16.340209  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:40:16.365345  195912 cri.go:89] found id: ""
	I1213 19:40:16.365368  195912 logs.go:282] 0 containers: []
	W1213 19:40:16.365376  195912 logs.go:284] No container was found matching "etcd"
	I1213 19:40:16.365383  195912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:40:16.365441  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:40:16.395612  195912 cri.go:89] found id: ""
	I1213 19:40:16.395632  195912 logs.go:282] 0 containers: []
	W1213 19:40:16.395640  195912 logs.go:284] No container was found matching "coredns"
	I1213 19:40:16.395649  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:40:16.395707  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:40:16.421312  195912 cri.go:89] found id: ""
	I1213 19:40:16.421341  195912 logs.go:282] 0 containers: []
	W1213 19:40:16.421350  195912 logs.go:284] No container was found matching "kube-scheduler"
	I1213 19:40:16.421356  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:40:16.421413  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:40:16.448679  195912 cri.go:89] found id: ""
	I1213 19:40:16.448705  195912 logs.go:282] 0 containers: []
	W1213 19:40:16.448714  195912 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:40:16.448720  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:40:16.448782  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:40:16.474842  195912 cri.go:89] found id: ""
	I1213 19:40:16.474870  195912 logs.go:282] 0 containers: []
	W1213 19:40:16.474879  195912 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 19:40:16.474885  195912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:40:16.474945  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:40:16.503979  195912 cri.go:89] found id: ""
	I1213 19:40:16.504001  195912 logs.go:282] 0 containers: []
	W1213 19:40:16.504009  195912 logs.go:284] No container was found matching "kindnet"
	I1213 19:40:16.504015  195912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 19:40:16.504078  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 19:40:16.528994  195912 cri.go:89] found id: ""
	I1213 19:40:16.529035  195912 logs.go:282] 0 containers: []
	W1213 19:40:16.529044  195912 logs.go:284] No container was found matching "storage-provisioner"
	I1213 19:40:16.529052  195912 logs.go:123] Gathering logs for container status ...
	I1213 19:40:16.529064  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:40:16.557546  195912 logs.go:123] Gathering logs for kubelet ...
	I1213 19:40:16.557574  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:40:16.628745  195912 logs.go:123] Gathering logs for dmesg ...
	I1213 19:40:16.628780  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:40:16.643276  195912 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:40:16.643310  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:40:16.709880  195912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:40:16.709904  195912 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:40:16.709917  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:40:19.242571  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:40:19.252311  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:40:19.252383  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:40:19.278743  195912 cri.go:89] found id: ""
	I1213 19:40:19.278771  195912 logs.go:282] 0 containers: []
	W1213 19:40:19.278798  195912 logs.go:284] No container was found matching "kube-apiserver"
	I1213 19:40:19.278804  195912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:40:19.278877  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:40:19.305271  195912 cri.go:89] found id: ""
	I1213 19:40:19.305294  195912 logs.go:282] 0 containers: []
	W1213 19:40:19.305302  195912 logs.go:284] No container was found matching "etcd"
	I1213 19:40:19.305309  195912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:40:19.305366  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:40:19.331143  195912 cri.go:89] found id: ""
	I1213 19:40:19.331169  195912 logs.go:282] 0 containers: []
	W1213 19:40:19.331177  195912 logs.go:284] No container was found matching "coredns"
	I1213 19:40:19.331184  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:40:19.331242  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:40:19.356125  195912 cri.go:89] found id: ""
	I1213 19:40:19.356146  195912 logs.go:282] 0 containers: []
	W1213 19:40:19.356158  195912 logs.go:284] No container was found matching "kube-scheduler"
	I1213 19:40:19.356165  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:40:19.356225  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:40:19.384391  195912 cri.go:89] found id: ""
	I1213 19:40:19.384413  195912 logs.go:282] 0 containers: []
	W1213 19:40:19.384421  195912 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:40:19.384427  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:40:19.384483  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:40:19.410385  195912 cri.go:89] found id: ""
	I1213 19:40:19.410413  195912 logs.go:282] 0 containers: []
	W1213 19:40:19.410422  195912 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 19:40:19.410428  195912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:40:19.410489  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:40:19.434871  195912 cri.go:89] found id: ""
	I1213 19:40:19.434897  195912 logs.go:282] 0 containers: []
	W1213 19:40:19.434916  195912 logs.go:284] No container was found matching "kindnet"
	I1213 19:40:19.434939  195912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 19:40:19.435016  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 19:40:19.463607  195912 cri.go:89] found id: ""
	I1213 19:40:19.463629  195912 logs.go:282] 0 containers: []
	W1213 19:40:19.463637  195912 logs.go:284] No container was found matching "storage-provisioner"
	I1213 19:40:19.463646  195912 logs.go:123] Gathering logs for container status ...
	I1213 19:40:19.463658  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:40:19.493798  195912 logs.go:123] Gathering logs for kubelet ...
	I1213 19:40:19.493828  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:40:19.564737  195912 logs.go:123] Gathering logs for dmesg ...
	I1213 19:40:19.564773  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:40:19.578873  195912 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:40:19.578906  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:40:19.645734  195912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:40:19.645794  195912 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:40:19.645814  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:40:22.177157  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:40:22.187171  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:40:22.187241  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:40:22.212846  195912 cri.go:89] found id: ""
	I1213 19:40:22.212873  195912 logs.go:282] 0 containers: []
	W1213 19:40:22.212882  195912 logs.go:284] No container was found matching "kube-apiserver"
	I1213 19:40:22.212889  195912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:40:22.212945  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:40:22.240272  195912 cri.go:89] found id: ""
	I1213 19:40:22.240297  195912 logs.go:282] 0 containers: []
	W1213 19:40:22.240306  195912 logs.go:284] No container was found matching "etcd"
	I1213 19:40:22.240312  195912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:40:22.240373  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:40:22.265976  195912 cri.go:89] found id: ""
	I1213 19:40:22.266053  195912 logs.go:282] 0 containers: []
	W1213 19:40:22.266069  195912 logs.go:284] No container was found matching "coredns"
	I1213 19:40:22.266076  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:40:22.266150  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:40:22.290812  195912 cri.go:89] found id: ""
	I1213 19:40:22.290835  195912 logs.go:282] 0 containers: []
	W1213 19:40:22.290843  195912 logs.go:284] No container was found matching "kube-scheduler"
	I1213 19:40:22.290849  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:40:22.290910  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:40:22.318027  195912 cri.go:89] found id: ""
	I1213 19:40:22.318056  195912 logs.go:282] 0 containers: []
	W1213 19:40:22.318065  195912 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:40:22.318071  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:40:22.318157  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:40:22.344775  195912 cri.go:89] found id: ""
	I1213 19:40:22.344809  195912 logs.go:282] 0 containers: []
	W1213 19:40:22.344818  195912 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 19:40:22.344830  195912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:40:22.344899  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:40:22.371659  195912 cri.go:89] found id: ""
	I1213 19:40:22.371693  195912 logs.go:282] 0 containers: []
	W1213 19:40:22.371701  195912 logs.go:284] No container was found matching "kindnet"
	I1213 19:40:22.371708  195912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 19:40:22.371776  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 19:40:22.397060  195912 cri.go:89] found id: ""
	I1213 19:40:22.397087  195912 logs.go:282] 0 containers: []
	W1213 19:40:22.397096  195912 logs.go:284] No container was found matching "storage-provisioner"
	I1213 19:40:22.397105  195912 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:40:22.397116  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:40:22.427322  195912 logs.go:123] Gathering logs for container status ...
	I1213 19:40:22.427360  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:40:22.454593  195912 logs.go:123] Gathering logs for kubelet ...
	I1213 19:40:22.454621  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:40:22.522478  195912 logs.go:123] Gathering logs for dmesg ...
	I1213 19:40:22.522512  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:40:22.536612  195912 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:40:22.536639  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:40:22.603129  195912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:40:25.103433  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:40:25.116647  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:40:25.116741  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:40:25.148936  195912 cri.go:89] found id: ""
	I1213 19:40:25.148964  195912 logs.go:282] 0 containers: []
	W1213 19:40:25.148974  195912 logs.go:284] No container was found matching "kube-apiserver"
	I1213 19:40:25.148981  195912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:40:25.149167  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:40:25.178693  195912 cri.go:89] found id: ""
	I1213 19:40:25.178715  195912 logs.go:282] 0 containers: []
	W1213 19:40:25.178723  195912 logs.go:284] No container was found matching "etcd"
	I1213 19:40:25.178730  195912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:40:25.178797  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:40:25.205193  195912 cri.go:89] found id: ""
	I1213 19:40:25.205216  195912 logs.go:282] 0 containers: []
	W1213 19:40:25.205225  195912 logs.go:284] No container was found matching "coredns"
	I1213 19:40:25.205231  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:40:25.205289  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:40:25.231967  195912 cri.go:89] found id: ""
	I1213 19:40:25.231989  195912 logs.go:282] 0 containers: []
	W1213 19:40:25.231998  195912 logs.go:284] No container was found matching "kube-scheduler"
	I1213 19:40:25.232004  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:40:25.232065  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:40:25.257757  195912 cri.go:89] found id: ""
	I1213 19:40:25.257781  195912 logs.go:282] 0 containers: []
	W1213 19:40:25.257790  195912 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:40:25.257802  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:40:25.257860  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:40:25.286753  195912 cri.go:89] found id: ""
	I1213 19:40:25.286780  195912 logs.go:282] 0 containers: []
	W1213 19:40:25.286790  195912 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 19:40:25.286796  195912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:40:25.286855  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:40:25.317298  195912 cri.go:89] found id: ""
	I1213 19:40:25.317321  195912 logs.go:282] 0 containers: []
	W1213 19:40:25.317329  195912 logs.go:284] No container was found matching "kindnet"
	I1213 19:40:25.317335  195912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 19:40:25.317396  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 19:40:25.342407  195912 cri.go:89] found id: ""
	I1213 19:40:25.342429  195912 logs.go:282] 0 containers: []
	W1213 19:40:25.342438  195912 logs.go:284] No container was found matching "storage-provisioner"
	I1213 19:40:25.342449  195912 logs.go:123] Gathering logs for container status ...
	I1213 19:40:25.342471  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:40:25.372341  195912 logs.go:123] Gathering logs for kubelet ...
	I1213 19:40:25.372421  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:40:25.441049  195912 logs.go:123] Gathering logs for dmesg ...
	I1213 19:40:25.441085  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:40:25.455813  195912 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:40:25.455965  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:40:25.520362  195912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:40:25.520384  195912 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:40:25.520397  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:40:28.051611  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:40:28.064187  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:40:28.064265  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:40:28.103889  195912 cri.go:89] found id: ""
	I1213 19:40:28.103915  195912 logs.go:282] 0 containers: []
	W1213 19:40:28.103924  195912 logs.go:284] No container was found matching "kube-apiserver"
	I1213 19:40:28.103930  195912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:40:28.103989  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:40:28.131471  195912 cri.go:89] found id: ""
	I1213 19:40:28.131499  195912 logs.go:282] 0 containers: []
	W1213 19:40:28.131507  195912 logs.go:284] No container was found matching "etcd"
	I1213 19:40:28.131513  195912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:40:28.131574  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:40:28.160802  195912 cri.go:89] found id: ""
	I1213 19:40:28.160828  195912 logs.go:282] 0 containers: []
	W1213 19:40:28.160837  195912 logs.go:284] No container was found matching "coredns"
	I1213 19:40:28.160843  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:40:28.160904  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:40:28.186273  195912 cri.go:89] found id: ""
	I1213 19:40:28.186349  195912 logs.go:282] 0 containers: []
	W1213 19:40:28.186365  195912 logs.go:284] No container was found matching "kube-scheduler"
	I1213 19:40:28.186372  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:40:28.186440  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:40:28.211467  195912 cri.go:89] found id: ""
	I1213 19:40:28.211495  195912 logs.go:282] 0 containers: []
	W1213 19:40:28.211505  195912 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:40:28.211512  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:40:28.211570  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:40:28.237358  195912 cri.go:89] found id: ""
	I1213 19:40:28.237384  195912 logs.go:282] 0 containers: []
	W1213 19:40:28.237393  195912 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 19:40:28.237399  195912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:40:28.237456  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:40:28.264190  195912 cri.go:89] found id: ""
	I1213 19:40:28.264215  195912 logs.go:282] 0 containers: []
	W1213 19:40:28.264225  195912 logs.go:284] No container was found matching "kindnet"
	I1213 19:40:28.264232  195912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 19:40:28.264292  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 19:40:28.292359  195912 cri.go:89] found id: ""
	I1213 19:40:28.292385  195912 logs.go:282] 0 containers: []
	W1213 19:40:28.292394  195912 logs.go:284] No container was found matching "storage-provisioner"
	I1213 19:40:28.292403  195912 logs.go:123] Gathering logs for kubelet ...
	I1213 19:40:28.292420  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:40:28.368586  195912 logs.go:123] Gathering logs for dmesg ...
	I1213 19:40:28.368631  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:40:28.383923  195912 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:40:28.383985  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:40:28.454774  195912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:40:28.454836  195912 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:40:28.454857  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:40:28.486655  195912 logs.go:123] Gathering logs for container status ...
	I1213 19:40:28.486692  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:40:31.024171  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:40:31.034945  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:40:31.035024  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:40:31.062926  195912 cri.go:89] found id: ""
	I1213 19:40:31.062955  195912 logs.go:282] 0 containers: []
	W1213 19:40:31.062965  195912 logs.go:284] No container was found matching "kube-apiserver"
	I1213 19:40:31.062971  195912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:40:31.063056  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:40:31.095057  195912 cri.go:89] found id: ""
	I1213 19:40:31.095084  195912 logs.go:282] 0 containers: []
	W1213 19:40:31.095093  195912 logs.go:284] No container was found matching "etcd"
	I1213 19:40:31.095099  195912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:40:31.095160  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:40:31.130654  195912 cri.go:89] found id: ""
	I1213 19:40:31.130685  195912 logs.go:282] 0 containers: []
	W1213 19:40:31.130693  195912 logs.go:284] No container was found matching "coredns"
	I1213 19:40:31.130699  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:40:31.130760  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:40:31.160657  195912 cri.go:89] found id: ""
	I1213 19:40:31.160687  195912 logs.go:282] 0 containers: []
	W1213 19:40:31.160697  195912 logs.go:284] No container was found matching "kube-scheduler"
	I1213 19:40:31.160704  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:40:31.160765  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:40:31.187382  195912 cri.go:89] found id: ""
	I1213 19:40:31.187408  195912 logs.go:282] 0 containers: []
	W1213 19:40:31.187417  195912 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:40:31.187423  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:40:31.187481  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:40:31.213150  195912 cri.go:89] found id: ""
	I1213 19:40:31.213178  195912 logs.go:282] 0 containers: []
	W1213 19:40:31.213187  195912 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 19:40:31.213193  195912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:40:31.213252  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:40:31.239960  195912 cri.go:89] found id: ""
	I1213 19:40:31.239983  195912 logs.go:282] 0 containers: []
	W1213 19:40:31.239992  195912 logs.go:284] No container was found matching "kindnet"
	I1213 19:40:31.239999  195912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 19:40:31.240055  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 19:40:31.266547  195912 cri.go:89] found id: ""
	I1213 19:40:31.266574  195912 logs.go:282] 0 containers: []
	W1213 19:40:31.266593  195912 logs.go:284] No container was found matching "storage-provisioner"
	I1213 19:40:31.266603  195912 logs.go:123] Gathering logs for kubelet ...
	I1213 19:40:31.266615  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:40:31.334060  195912 logs.go:123] Gathering logs for dmesg ...
	I1213 19:40:31.334096  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:40:31.348310  195912 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:40:31.348338  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:40:31.417713  195912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:40:31.417736  195912 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:40:31.417748  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:40:31.448704  195912 logs.go:123] Gathering logs for container status ...
	I1213 19:40:31.448739  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:40:33.979075  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:40:33.991034  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:40:33.991111  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:40:34.030286  195912 cri.go:89] found id: ""
	I1213 19:40:34.030308  195912 logs.go:282] 0 containers: []
	W1213 19:40:34.030317  195912 logs.go:284] No container was found matching "kube-apiserver"
	I1213 19:40:34.030323  195912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:40:34.030384  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:40:34.066761  195912 cri.go:89] found id: ""
	I1213 19:40:34.066785  195912 logs.go:282] 0 containers: []
	W1213 19:40:34.066794  195912 logs.go:284] No container was found matching "etcd"
	I1213 19:40:34.066800  195912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:40:34.066864  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:40:34.136473  195912 cri.go:89] found id: ""
	I1213 19:40:34.136497  195912 logs.go:282] 0 containers: []
	W1213 19:40:34.136506  195912 logs.go:284] No container was found matching "coredns"
	I1213 19:40:34.136512  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:40:34.136579  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:40:34.182634  195912 cri.go:89] found id: ""
	I1213 19:40:34.182657  195912 logs.go:282] 0 containers: []
	W1213 19:40:34.182666  195912 logs.go:284] No container was found matching "kube-scheduler"
	I1213 19:40:34.182672  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:40:34.182731  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:40:34.236719  195912 cri.go:89] found id: ""
	I1213 19:40:34.236742  195912 logs.go:282] 0 containers: []
	W1213 19:40:34.236750  195912 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:40:34.236818  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:40:34.236908  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:40:34.283998  195912 cri.go:89] found id: ""
	I1213 19:40:34.284073  195912 logs.go:282] 0 containers: []
	W1213 19:40:34.284097  195912 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 19:40:34.284117  195912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:40:34.284198  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:40:34.319334  195912 cri.go:89] found id: ""
	I1213 19:40:34.319362  195912 logs.go:282] 0 containers: []
	W1213 19:40:34.319372  195912 logs.go:284] No container was found matching "kindnet"
	I1213 19:40:34.319378  195912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 19:40:34.319434  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 19:40:34.368375  195912 cri.go:89] found id: ""
	I1213 19:40:34.368401  195912 logs.go:282] 0 containers: []
	W1213 19:40:34.368410  195912 logs.go:284] No container was found matching "storage-provisioner"
	I1213 19:40:34.368420  195912 logs.go:123] Gathering logs for kubelet ...
	I1213 19:40:34.368432  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:40:34.450688  195912 logs.go:123] Gathering logs for dmesg ...
	I1213 19:40:34.450791  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:40:34.471984  195912 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:40:34.472013  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:40:34.553485  195912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:40:34.553507  195912 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:40:34.553520  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:40:34.585299  195912 logs.go:123] Gathering logs for container status ...
	I1213 19:40:34.585334  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:40:37.117130  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:40:37.133452  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:40:37.133526  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:40:37.190193  195912 cri.go:89] found id: ""
	I1213 19:40:37.190219  195912 logs.go:282] 0 containers: []
	W1213 19:40:37.190227  195912 logs.go:284] No container was found matching "kube-apiserver"
	I1213 19:40:37.190233  195912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:40:37.190291  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:40:37.227453  195912 cri.go:89] found id: ""
	I1213 19:40:37.227479  195912 logs.go:282] 0 containers: []
	W1213 19:40:37.227488  195912 logs.go:284] No container was found matching "etcd"
	I1213 19:40:37.227494  195912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:40:37.227558  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:40:37.263136  195912 cri.go:89] found id: ""
	I1213 19:40:37.263164  195912 logs.go:282] 0 containers: []
	W1213 19:40:37.263174  195912 logs.go:284] No container was found matching "coredns"
	I1213 19:40:37.263180  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:40:37.263239  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:40:37.301979  195912 cri.go:89] found id: ""
	I1213 19:40:37.302004  195912 logs.go:282] 0 containers: []
	W1213 19:40:37.302019  195912 logs.go:284] No container was found matching "kube-scheduler"
	I1213 19:40:37.302032  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:40:37.302094  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:40:37.338139  195912 cri.go:89] found id: ""
	I1213 19:40:37.338161  195912 logs.go:282] 0 containers: []
	W1213 19:40:37.338169  195912 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:40:37.338177  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:40:37.338236  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:40:37.377082  195912 cri.go:89] found id: ""
	I1213 19:40:37.377103  195912 logs.go:282] 0 containers: []
	W1213 19:40:37.377111  195912 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 19:40:37.377118  195912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:40:37.377176  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:40:37.406144  195912 cri.go:89] found id: ""
	I1213 19:40:37.406165  195912 logs.go:282] 0 containers: []
	W1213 19:40:37.406174  195912 logs.go:284] No container was found matching "kindnet"
	I1213 19:40:37.406180  195912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 19:40:37.406237  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 19:40:37.452543  195912 cri.go:89] found id: ""
	I1213 19:40:37.452565  195912 logs.go:282] 0 containers: []
	W1213 19:40:37.452573  195912 logs.go:284] No container was found matching "storage-provisioner"
	I1213 19:40:37.452582  195912 logs.go:123] Gathering logs for dmesg ...
	I1213 19:40:37.452593  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:40:37.469962  195912 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:40:37.469988  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:40:37.564202  195912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:40:37.564269  195912 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:40:37.564298  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:40:37.601298  195912 logs.go:123] Gathering logs for container status ...
	I1213 19:40:37.601327  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:40:37.637330  195912 logs.go:123] Gathering logs for kubelet ...
	I1213 19:40:37.637404  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:40:40.218835  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:40:40.228826  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:40:40.228898  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:40:40.254914  195912 cri.go:89] found id: ""
	I1213 19:40:40.254937  195912 logs.go:282] 0 containers: []
	W1213 19:40:40.254946  195912 logs.go:284] No container was found matching "kube-apiserver"
	I1213 19:40:40.254952  195912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:40:40.255008  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:40:40.280326  195912 cri.go:89] found id: ""
	I1213 19:40:40.280351  195912 logs.go:282] 0 containers: []
	W1213 19:40:40.280360  195912 logs.go:284] No container was found matching "etcd"
	I1213 19:40:40.280366  195912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:40:40.280425  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:40:40.309072  195912 cri.go:89] found id: ""
	I1213 19:40:40.309096  195912 logs.go:282] 0 containers: []
	W1213 19:40:40.309105  195912 logs.go:284] No container was found matching "coredns"
	I1213 19:40:40.309111  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:40:40.309168  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:40:40.334205  195912 cri.go:89] found id: ""
	I1213 19:40:40.334229  195912 logs.go:282] 0 containers: []
	W1213 19:40:40.334238  195912 logs.go:284] No container was found matching "kube-scheduler"
	I1213 19:40:40.334244  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:40:40.334301  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:40:40.358720  195912 cri.go:89] found id: ""
	I1213 19:40:40.358744  195912 logs.go:282] 0 containers: []
	W1213 19:40:40.358752  195912 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:40:40.358759  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:40:40.358813  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:40:40.390058  195912 cri.go:89] found id: ""
	I1213 19:40:40.390081  195912 logs.go:282] 0 containers: []
	W1213 19:40:40.390090  195912 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 19:40:40.390096  195912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:40:40.390157  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:40:40.414891  195912 cri.go:89] found id: ""
	I1213 19:40:40.414961  195912 logs.go:282] 0 containers: []
	W1213 19:40:40.414993  195912 logs.go:284] No container was found matching "kindnet"
	I1213 19:40:40.415014  195912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 19:40:40.415107  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 19:40:40.441413  195912 cri.go:89] found id: ""
	I1213 19:40:40.441439  195912 logs.go:282] 0 containers: []
	W1213 19:40:40.441448  195912 logs.go:284] No container was found matching "storage-provisioner"
	I1213 19:40:40.441457  195912 logs.go:123] Gathering logs for container status ...
	I1213 19:40:40.441468  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:40:40.472942  195912 logs.go:123] Gathering logs for kubelet ...
	I1213 19:40:40.472975  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:40:40.540015  195912 logs.go:123] Gathering logs for dmesg ...
	I1213 19:40:40.540049  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:40:40.554457  195912 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:40:40.554500  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:40:40.633517  195912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:40:40.633540  195912 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:40:40.633561  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:40:43.178310  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:40:43.188234  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:40:43.188306  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:40:43.215065  195912 cri.go:89] found id: ""
	I1213 19:40:43.215089  195912 logs.go:282] 0 containers: []
	W1213 19:40:43.215097  195912 logs.go:284] No container was found matching "kube-apiserver"
	I1213 19:40:43.215103  195912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:40:43.215166  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:40:43.242274  195912 cri.go:89] found id: ""
	I1213 19:40:43.242300  195912 logs.go:282] 0 containers: []
	W1213 19:40:43.242310  195912 logs.go:284] No container was found matching "etcd"
	I1213 19:40:43.242317  195912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:40:43.242378  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:40:43.268072  195912 cri.go:89] found id: ""
	I1213 19:40:43.268099  195912 logs.go:282] 0 containers: []
	W1213 19:40:43.268108  195912 logs.go:284] No container was found matching "coredns"
	I1213 19:40:43.268115  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:40:43.268178  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:40:43.294917  195912 cri.go:89] found id: ""
	I1213 19:40:43.294943  195912 logs.go:282] 0 containers: []
	W1213 19:40:43.294952  195912 logs.go:284] No container was found matching "kube-scheduler"
	I1213 19:40:43.294959  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:40:43.295019  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:40:43.325091  195912 cri.go:89] found id: ""
	I1213 19:40:43.325121  195912 logs.go:282] 0 containers: []
	W1213 19:40:43.325131  195912 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:40:43.325137  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:40:43.325197  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:40:43.351274  195912 cri.go:89] found id: ""
	I1213 19:40:43.351296  195912 logs.go:282] 0 containers: []
	W1213 19:40:43.351304  195912 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 19:40:43.351311  195912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:40:43.351366  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:40:43.379383  195912 cri.go:89] found id: ""
	I1213 19:40:43.379405  195912 logs.go:282] 0 containers: []
	W1213 19:40:43.379413  195912 logs.go:284] No container was found matching "kindnet"
	I1213 19:40:43.379421  195912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 19:40:43.379478  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 19:40:43.405882  195912 cri.go:89] found id: ""
	I1213 19:40:43.405905  195912 logs.go:282] 0 containers: []
	W1213 19:40:43.405912  195912 logs.go:284] No container was found matching "storage-provisioner"
	I1213 19:40:43.405921  195912 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:40:43.405935  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:40:43.437121  195912 logs.go:123] Gathering logs for container status ...
	I1213 19:40:43.437160  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:40:43.470667  195912 logs.go:123] Gathering logs for kubelet ...
	I1213 19:40:43.470693  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:40:43.540296  195912 logs.go:123] Gathering logs for dmesg ...
	I1213 19:40:43.540337  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:40:43.554192  195912 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:40:43.554218  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:40:43.622268  195912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:40:46.122485  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:40:46.134828  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:40:46.134903  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:40:46.162702  195912 cri.go:89] found id: ""
	I1213 19:40:46.162737  195912 logs.go:282] 0 containers: []
	W1213 19:40:46.162747  195912 logs.go:284] No container was found matching "kube-apiserver"
	I1213 19:40:46.162753  195912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:40:46.162815  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:40:46.191621  195912 cri.go:89] found id: ""
	I1213 19:40:46.191645  195912 logs.go:282] 0 containers: []
	W1213 19:40:46.191653  195912 logs.go:284] No container was found matching "etcd"
	I1213 19:40:46.191660  195912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:40:46.191717  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:40:46.219685  195912 cri.go:89] found id: ""
	I1213 19:40:46.219710  195912 logs.go:282] 0 containers: []
	W1213 19:40:46.219719  195912 logs.go:284] No container was found matching "coredns"
	I1213 19:40:46.219725  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:40:46.219786  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:40:46.245254  195912 cri.go:89] found id: ""
	I1213 19:40:46.245277  195912 logs.go:282] 0 containers: []
	W1213 19:40:46.245295  195912 logs.go:284] No container was found matching "kube-scheduler"
	I1213 19:40:46.245302  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:40:46.245361  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:40:46.274482  195912 cri.go:89] found id: ""
	I1213 19:40:46.274508  195912 logs.go:282] 0 containers: []
	W1213 19:40:46.274518  195912 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:40:46.274524  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:40:46.274582  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:40:46.300551  195912 cri.go:89] found id: ""
	I1213 19:40:46.300576  195912 logs.go:282] 0 containers: []
	W1213 19:40:46.300584  195912 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 19:40:46.300591  195912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:40:46.300649  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:40:46.326639  195912 cri.go:89] found id: ""
	I1213 19:40:46.326664  195912 logs.go:282] 0 containers: []
	W1213 19:40:46.326673  195912 logs.go:284] No container was found matching "kindnet"
	I1213 19:40:46.326679  195912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 19:40:46.326738  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 19:40:46.351433  195912 cri.go:89] found id: ""
	I1213 19:40:46.351462  195912 logs.go:282] 0 containers: []
	W1213 19:40:46.351471  195912 logs.go:284] No container was found matching "storage-provisioner"
	I1213 19:40:46.351480  195912 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:40:46.351491  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:40:46.420533  195912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:40:46.420556  195912 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:40:46.420570  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:40:46.451870  195912 logs.go:123] Gathering logs for container status ...
	I1213 19:40:46.451905  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:40:46.480041  195912 logs.go:123] Gathering logs for kubelet ...
	I1213 19:40:46.480118  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:40:46.546815  195912 logs.go:123] Gathering logs for dmesg ...
	I1213 19:40:46.546850  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:40:49.061755  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:40:49.071789  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:40:49.071859  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:40:49.109343  195912 cri.go:89] found id: ""
	I1213 19:40:49.109370  195912 logs.go:282] 0 containers: []
	W1213 19:40:49.109379  195912 logs.go:284] No container was found matching "kube-apiserver"
	I1213 19:40:49.109385  195912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:40:49.109443  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:40:49.148093  195912 cri.go:89] found id: ""
	I1213 19:40:49.148119  195912 logs.go:282] 0 containers: []
	W1213 19:40:49.148127  195912 logs.go:284] No container was found matching "etcd"
	I1213 19:40:49.148133  195912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:40:49.148193  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:40:49.175245  195912 cri.go:89] found id: ""
	I1213 19:40:49.175273  195912 logs.go:282] 0 containers: []
	W1213 19:40:49.175282  195912 logs.go:284] No container was found matching "coredns"
	I1213 19:40:49.175288  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:40:49.175351  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:40:49.203800  195912 cri.go:89] found id: ""
	I1213 19:40:49.203828  195912 logs.go:282] 0 containers: []
	W1213 19:40:49.203844  195912 logs.go:284] No container was found matching "kube-scheduler"
	I1213 19:40:49.203850  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:40:49.203910  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:40:49.231603  195912 cri.go:89] found id: ""
	I1213 19:40:49.231631  195912 logs.go:282] 0 containers: []
	W1213 19:40:49.231639  195912 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:40:49.231645  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:40:49.231768  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:40:49.257739  195912 cri.go:89] found id: ""
	I1213 19:40:49.257765  195912 logs.go:282] 0 containers: []
	W1213 19:40:49.257774  195912 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 19:40:49.257781  195912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:40:49.257845  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:40:49.283244  195912 cri.go:89] found id: ""
	I1213 19:40:49.283269  195912 logs.go:282] 0 containers: []
	W1213 19:40:49.283278  195912 logs.go:284] No container was found matching "kindnet"
	I1213 19:40:49.283284  195912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 19:40:49.283349  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 19:40:49.310406  195912 cri.go:89] found id: ""
	I1213 19:40:49.310485  195912 logs.go:282] 0 containers: []
	W1213 19:40:49.310512  195912 logs.go:284] No container was found matching "storage-provisioner"
	I1213 19:40:49.310533  195912 logs.go:123] Gathering logs for dmesg ...
	I1213 19:40:49.310550  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:40:49.325169  195912 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:40:49.325200  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:40:49.388111  195912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:40:49.388129  195912 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:40:49.388142  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:40:49.420392  195912 logs.go:123] Gathering logs for container status ...
	I1213 19:40:49.420426  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:40:49.453458  195912 logs.go:123] Gathering logs for kubelet ...
	I1213 19:40:49.453485  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:40:52.021154  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:40:52.031665  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:40:52.031734  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:40:52.057294  195912 cri.go:89] found id: ""
	I1213 19:40:52.057320  195912 logs.go:282] 0 containers: []
	W1213 19:40:52.057329  195912 logs.go:284] No container was found matching "kube-apiserver"
	I1213 19:40:52.057336  195912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:40:52.057396  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:40:52.092981  195912 cri.go:89] found id: ""
	I1213 19:40:52.093022  195912 logs.go:282] 0 containers: []
	W1213 19:40:52.093033  195912 logs.go:284] No container was found matching "etcd"
	I1213 19:40:52.093039  195912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:40:52.093104  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:40:52.127166  195912 cri.go:89] found id: ""
	I1213 19:40:52.127194  195912 logs.go:282] 0 containers: []
	W1213 19:40:52.127210  195912 logs.go:284] No container was found matching "coredns"
	I1213 19:40:52.127218  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:40:52.127281  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:40:52.155525  195912 cri.go:89] found id: ""
	I1213 19:40:52.155546  195912 logs.go:282] 0 containers: []
	W1213 19:40:52.155554  195912 logs.go:284] No container was found matching "kube-scheduler"
	I1213 19:40:52.155560  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:40:52.155621  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:40:52.180917  195912 cri.go:89] found id: ""
	I1213 19:40:52.180939  195912 logs.go:282] 0 containers: []
	W1213 19:40:52.180948  195912 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:40:52.180954  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:40:52.181049  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:40:52.206226  195912 cri.go:89] found id: ""
	I1213 19:40:52.206252  195912 logs.go:282] 0 containers: []
	W1213 19:40:52.206261  195912 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 19:40:52.206267  195912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:40:52.206326  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:40:52.231831  195912 cri.go:89] found id: ""
	I1213 19:40:52.231857  195912 logs.go:282] 0 containers: []
	W1213 19:40:52.231867  195912 logs.go:284] No container was found matching "kindnet"
	I1213 19:40:52.231873  195912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 19:40:52.231930  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 19:40:52.257762  195912 cri.go:89] found id: ""
	I1213 19:40:52.257785  195912 logs.go:282] 0 containers: []
	W1213 19:40:52.257794  195912 logs.go:284] No container was found matching "storage-provisioner"
	I1213 19:40:52.257803  195912 logs.go:123] Gathering logs for container status ...
	I1213 19:40:52.257814  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:40:52.287499  195912 logs.go:123] Gathering logs for kubelet ...
	I1213 19:40:52.287527  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:40:52.356579  195912 logs.go:123] Gathering logs for dmesg ...
	I1213 19:40:52.356614  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:40:52.371231  195912 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:40:52.371259  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:40:52.434994  195912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:40:52.435054  195912 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:40:52.435071  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:40:54.966537  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:40:54.979717  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:40:54.979798  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:40:55.029312  195912 cri.go:89] found id: ""
	I1213 19:40:55.029337  195912 logs.go:282] 0 containers: []
	W1213 19:40:55.029346  195912 logs.go:284] No container was found matching "kube-apiserver"
	I1213 19:40:55.029353  195912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:40:55.029422  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:40:55.075903  195912 cri.go:89] found id: ""
	I1213 19:40:55.075930  195912 logs.go:282] 0 containers: []
	W1213 19:40:55.075940  195912 logs.go:284] No container was found matching "etcd"
	I1213 19:40:55.075946  195912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:40:55.076009  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:40:55.188253  195912 cri.go:89] found id: ""
	I1213 19:40:55.188280  195912 logs.go:282] 0 containers: []
	W1213 19:40:55.188289  195912 logs.go:284] No container was found matching "coredns"
	I1213 19:40:55.188295  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:40:55.188357  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:40:55.231173  195912 cri.go:89] found id: ""
	I1213 19:40:55.231211  195912 logs.go:282] 0 containers: []
	W1213 19:40:55.231222  195912 logs.go:284] No container was found matching "kube-scheduler"
	I1213 19:40:55.231228  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:40:55.231289  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:40:55.262357  195912 cri.go:89] found id: ""
	I1213 19:40:55.262386  195912 logs.go:282] 0 containers: []
	W1213 19:40:55.262395  195912 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:40:55.262401  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:40:55.262463  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:40:55.300759  195912 cri.go:89] found id: ""
	I1213 19:40:55.300787  195912 logs.go:282] 0 containers: []
	W1213 19:40:55.300796  195912 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 19:40:55.300802  195912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:40:55.300862  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:40:55.333379  195912 cri.go:89] found id: ""
	I1213 19:40:55.333403  195912 logs.go:282] 0 containers: []
	W1213 19:40:55.333412  195912 logs.go:284] No container was found matching "kindnet"
	I1213 19:40:55.333418  195912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 19:40:55.333488  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 19:40:55.360836  195912 cri.go:89] found id: ""
	I1213 19:40:55.360859  195912 logs.go:282] 0 containers: []
	W1213 19:40:55.360868  195912 logs.go:284] No container was found matching "storage-provisioner"
	I1213 19:40:55.360876  195912 logs.go:123] Gathering logs for container status ...
	I1213 19:40:55.360888  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:40:55.389837  195912 logs.go:123] Gathering logs for kubelet ...
	I1213 19:40:55.389869  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:40:55.458155  195912 logs.go:123] Gathering logs for dmesg ...
	I1213 19:40:55.458193  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:40:55.474181  195912 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:40:55.474210  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:40:55.538010  195912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:40:55.538032  195912 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:40:55.538045  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:40:58.069707  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:40:58.086941  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:40:58.087006  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:40:58.131641  195912 cri.go:89] found id: ""
	I1213 19:40:58.131662  195912 logs.go:282] 0 containers: []
	W1213 19:40:58.131670  195912 logs.go:284] No container was found matching "kube-apiserver"
	I1213 19:40:58.131676  195912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:40:58.131735  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:40:58.191617  195912 cri.go:89] found id: ""
	I1213 19:40:58.191637  195912 logs.go:282] 0 containers: []
	W1213 19:40:58.191646  195912 logs.go:284] No container was found matching "etcd"
	I1213 19:40:58.191652  195912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:40:58.191709  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:40:58.229730  195912 cri.go:89] found id: ""
	I1213 19:40:58.229751  195912 logs.go:282] 0 containers: []
	W1213 19:40:58.229759  195912 logs.go:284] No container was found matching "coredns"
	I1213 19:40:58.229764  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:40:58.229827  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:40:58.267859  195912 cri.go:89] found id: ""
	I1213 19:40:58.267882  195912 logs.go:282] 0 containers: []
	W1213 19:40:58.267890  195912 logs.go:284] No container was found matching "kube-scheduler"
	I1213 19:40:58.267903  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:40:58.267962  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:40:58.303933  195912 cri.go:89] found id: ""
	I1213 19:40:58.303959  195912 logs.go:282] 0 containers: []
	W1213 19:40:58.303969  195912 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:40:58.303975  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:40:58.304033  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:40:58.332677  195912 cri.go:89] found id: ""
	I1213 19:40:58.332702  195912 logs.go:282] 0 containers: []
	W1213 19:40:58.332711  195912 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 19:40:58.332717  195912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:40:58.332771  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:40:58.366247  195912 cri.go:89] found id: ""
	I1213 19:40:58.366264  195912 logs.go:282] 0 containers: []
	W1213 19:40:58.366273  195912 logs.go:284] No container was found matching "kindnet"
	I1213 19:40:58.366291  195912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 19:40:58.366353  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 19:40:58.407336  195912 cri.go:89] found id: ""
	I1213 19:40:58.407368  195912 logs.go:282] 0 containers: []
	W1213 19:40:58.407377  195912 logs.go:284] No container was found matching "storage-provisioner"
	I1213 19:40:58.407387  195912 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:40:58.407398  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:40:58.507914  195912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:40:58.507937  195912 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:40:58.507950  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:40:58.544646  195912 logs.go:123] Gathering logs for container status ...
	I1213 19:40:58.544687  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:40:58.587729  195912 logs.go:123] Gathering logs for kubelet ...
	I1213 19:40:58.587760  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:40:58.668244  195912 logs.go:123] Gathering logs for dmesg ...
	I1213 19:40:58.668280  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:41:01.183763  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:41:01.195948  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:41:01.196036  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:41:01.235641  195912 cri.go:89] found id: ""
	I1213 19:41:01.235662  195912 logs.go:282] 0 containers: []
	W1213 19:41:01.235671  195912 logs.go:284] No container was found matching "kube-apiserver"
	I1213 19:41:01.235677  195912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:41:01.235736  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:41:01.267009  195912 cri.go:89] found id: ""
	I1213 19:41:01.267033  195912 logs.go:282] 0 containers: []
	W1213 19:41:01.267042  195912 logs.go:284] No container was found matching "etcd"
	I1213 19:41:01.267049  195912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:41:01.267122  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:41:01.309625  195912 cri.go:89] found id: ""
	I1213 19:41:01.309647  195912 logs.go:282] 0 containers: []
	W1213 19:41:01.309655  195912 logs.go:284] No container was found matching "coredns"
	I1213 19:41:01.309662  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:41:01.309721  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:41:01.343089  195912 cri.go:89] found id: ""
	I1213 19:41:01.343109  195912 logs.go:282] 0 containers: []
	W1213 19:41:01.343118  195912 logs.go:284] No container was found matching "kube-scheduler"
	I1213 19:41:01.343125  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:41:01.343191  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:41:01.382027  195912 cri.go:89] found id: ""
	I1213 19:41:01.382049  195912 logs.go:282] 0 containers: []
	W1213 19:41:01.382057  195912 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:41:01.382063  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:41:01.382122  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:41:01.410325  195912 cri.go:89] found id: ""
	I1213 19:41:01.410345  195912 logs.go:282] 0 containers: []
	W1213 19:41:01.410353  195912 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 19:41:01.410359  195912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:41:01.410417  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:41:01.437880  195912 cri.go:89] found id: ""
	I1213 19:41:01.437903  195912 logs.go:282] 0 containers: []
	W1213 19:41:01.437911  195912 logs.go:284] No container was found matching "kindnet"
	I1213 19:41:01.437918  195912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 19:41:01.437976  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 19:41:01.466879  195912 cri.go:89] found id: ""
	I1213 19:41:01.466900  195912 logs.go:282] 0 containers: []
	W1213 19:41:01.466909  195912 logs.go:284] No container was found matching "storage-provisioner"
	I1213 19:41:01.466920  195912 logs.go:123] Gathering logs for kubelet ...
	I1213 19:41:01.466932  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:41:01.550809  195912 logs.go:123] Gathering logs for dmesg ...
	I1213 19:41:01.550883  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:41:01.565879  195912 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:41:01.565964  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:41:01.636513  195912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:41:01.636589  195912 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:41:01.636614  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:41:01.669061  195912 logs.go:123] Gathering logs for container status ...
	I1213 19:41:01.669096  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:41:04.200251  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:41:04.211601  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:41:04.211690  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:41:04.239422  195912 cri.go:89] found id: ""
	I1213 19:41:04.239448  195912 logs.go:282] 0 containers: []
	W1213 19:41:04.239456  195912 logs.go:284] No container was found matching "kube-apiserver"
	I1213 19:41:04.239463  195912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:41:04.239525  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:41:04.268229  195912 cri.go:89] found id: ""
	I1213 19:41:04.268254  195912 logs.go:282] 0 containers: []
	W1213 19:41:04.268262  195912 logs.go:284] No container was found matching "etcd"
	I1213 19:41:04.268269  195912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:41:04.268328  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:41:04.296126  195912 cri.go:89] found id: ""
	I1213 19:41:04.296154  195912 logs.go:282] 0 containers: []
	W1213 19:41:04.296163  195912 logs.go:284] No container was found matching "coredns"
	I1213 19:41:04.296169  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:41:04.296227  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:41:04.322799  195912 cri.go:89] found id: ""
	I1213 19:41:04.322826  195912 logs.go:282] 0 containers: []
	W1213 19:41:04.322836  195912 logs.go:284] No container was found matching "kube-scheduler"
	I1213 19:41:04.322842  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:41:04.322902  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:41:04.349505  195912 cri.go:89] found id: ""
	I1213 19:41:04.349530  195912 logs.go:282] 0 containers: []
	W1213 19:41:04.349538  195912 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:41:04.349545  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:41:04.349606  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:41:04.379645  195912 cri.go:89] found id: ""
	I1213 19:41:04.379667  195912 logs.go:282] 0 containers: []
	W1213 19:41:04.379676  195912 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 19:41:04.379682  195912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:41:04.379739  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:41:04.405586  195912 cri.go:89] found id: ""
	I1213 19:41:04.405611  195912 logs.go:282] 0 containers: []
	W1213 19:41:04.405621  195912 logs.go:284] No container was found matching "kindnet"
	I1213 19:41:04.405626  195912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 19:41:04.405706  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 19:41:04.432645  195912 cri.go:89] found id: ""
	I1213 19:41:04.432671  195912 logs.go:282] 0 containers: []
	W1213 19:41:04.432681  195912 logs.go:284] No container was found matching "storage-provisioner"
	I1213 19:41:04.432691  195912 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:41:04.432713  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:41:04.496850  195912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:41:04.496870  195912 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:41:04.496882  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:41:04.528303  195912 logs.go:123] Gathering logs for container status ...
	I1213 19:41:04.528336  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:41:04.557779  195912 logs.go:123] Gathering logs for kubelet ...
	I1213 19:41:04.557811  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:41:04.625986  195912 logs.go:123] Gathering logs for dmesg ...
	I1213 19:41:04.626024  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:41:07.141135  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:41:07.158154  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:41:07.158225  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:41:07.209116  195912 cri.go:89] found id: ""
	I1213 19:41:07.209146  195912 logs.go:282] 0 containers: []
	W1213 19:41:07.209155  195912 logs.go:284] No container was found matching "kube-apiserver"
	I1213 19:41:07.209161  195912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:41:07.209230  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:41:07.261831  195912 cri.go:89] found id: ""
	I1213 19:41:07.261858  195912 logs.go:282] 0 containers: []
	W1213 19:41:07.261867  195912 logs.go:284] No container was found matching "etcd"
	I1213 19:41:07.261883  195912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:41:07.261945  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:41:07.296702  195912 cri.go:89] found id: ""
	I1213 19:41:07.296729  195912 logs.go:282] 0 containers: []
	W1213 19:41:07.296738  195912 logs.go:284] No container was found matching "coredns"
	I1213 19:41:07.296745  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:41:07.296811  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:41:07.331326  195912 cri.go:89] found id: ""
	I1213 19:41:07.331353  195912 logs.go:282] 0 containers: []
	W1213 19:41:07.331362  195912 logs.go:284] No container was found matching "kube-scheduler"
	I1213 19:41:07.331368  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:41:07.331425  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:41:07.359265  195912 cri.go:89] found id: ""
	I1213 19:41:07.359293  195912 logs.go:282] 0 containers: []
	W1213 19:41:07.359302  195912 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:41:07.359308  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:41:07.359375  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:41:07.391256  195912 cri.go:89] found id: ""
	I1213 19:41:07.391284  195912 logs.go:282] 0 containers: []
	W1213 19:41:07.391293  195912 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 19:41:07.391299  195912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:41:07.391355  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:41:07.419170  195912 cri.go:89] found id: ""
	I1213 19:41:07.419197  195912 logs.go:282] 0 containers: []
	W1213 19:41:07.419207  195912 logs.go:284] No container was found matching "kindnet"
	I1213 19:41:07.419213  195912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 19:41:07.419271  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 19:41:07.445621  195912 cri.go:89] found id: ""
	I1213 19:41:07.445643  195912 logs.go:282] 0 containers: []
	W1213 19:41:07.445651  195912 logs.go:284] No container was found matching "storage-provisioner"
	I1213 19:41:07.445672  195912 logs.go:123] Gathering logs for kubelet ...
	I1213 19:41:07.445689  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:41:07.521100  195912 logs.go:123] Gathering logs for dmesg ...
	I1213 19:41:07.521174  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:41:07.539018  195912 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:41:07.539159  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:41:07.632249  195912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:41:07.632313  195912 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:41:07.632339  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:41:07.668049  195912 logs.go:123] Gathering logs for container status ...
	I1213 19:41:07.668080  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:41:10.205385  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:41:10.215362  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:41:10.215433  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:41:10.240590  195912 cri.go:89] found id: ""
	I1213 19:41:10.240611  195912 logs.go:282] 0 containers: []
	W1213 19:41:10.240620  195912 logs.go:284] No container was found matching "kube-apiserver"
	I1213 19:41:10.240626  195912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:41:10.240684  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:41:10.266257  195912 cri.go:89] found id: ""
	I1213 19:41:10.266281  195912 logs.go:282] 0 containers: []
	W1213 19:41:10.266290  195912 logs.go:284] No container was found matching "etcd"
	I1213 19:41:10.266308  195912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:41:10.266368  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:41:10.290318  195912 cri.go:89] found id: ""
	I1213 19:41:10.290344  195912 logs.go:282] 0 containers: []
	W1213 19:41:10.290352  195912 logs.go:284] No container was found matching "coredns"
	I1213 19:41:10.290358  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:41:10.290445  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:41:10.314880  195912 cri.go:89] found id: ""
	I1213 19:41:10.314905  195912 logs.go:282] 0 containers: []
	W1213 19:41:10.314915  195912 logs.go:284] No container was found matching "kube-scheduler"
	I1213 19:41:10.314921  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:41:10.314981  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:41:10.338999  195912 cri.go:89] found id: ""
	I1213 19:41:10.339020  195912 logs.go:282] 0 containers: []
	W1213 19:41:10.339029  195912 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:41:10.339034  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:41:10.339091  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:41:10.367085  195912 cri.go:89] found id: ""
	I1213 19:41:10.367112  195912 logs.go:282] 0 containers: []
	W1213 19:41:10.367128  195912 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 19:41:10.367135  195912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:41:10.367199  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:41:10.394735  195912 cri.go:89] found id: ""
	I1213 19:41:10.394762  195912 logs.go:282] 0 containers: []
	W1213 19:41:10.394771  195912 logs.go:284] No container was found matching "kindnet"
	I1213 19:41:10.394777  195912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 19:41:10.394838  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 19:41:10.419269  195912 cri.go:89] found id: ""
	I1213 19:41:10.419294  195912 logs.go:282] 0 containers: []
	W1213 19:41:10.419303  195912 logs.go:284] No container was found matching "storage-provisioner"
	I1213 19:41:10.419313  195912 logs.go:123] Gathering logs for kubelet ...
	I1213 19:41:10.419324  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:41:10.499607  195912 logs.go:123] Gathering logs for dmesg ...
	I1213 19:41:10.499663  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:41:10.513892  195912 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:41:10.513923  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:41:10.577055  195912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:41:10.577084  195912 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:41:10.577096  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:41:10.610022  195912 logs.go:123] Gathering logs for container status ...
	I1213 19:41:10.610060  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:41:13.139247  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:41:13.149365  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:41:13.149449  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:41:13.180428  195912 cri.go:89] found id: ""
	I1213 19:41:13.180452  195912 logs.go:282] 0 containers: []
	W1213 19:41:13.180461  195912 logs.go:284] No container was found matching "kube-apiserver"
	I1213 19:41:13.180467  195912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:41:13.180527  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:41:13.205505  195912 cri.go:89] found id: ""
	I1213 19:41:13.205531  195912 logs.go:282] 0 containers: []
	W1213 19:41:13.205539  195912 logs.go:284] No container was found matching "etcd"
	I1213 19:41:13.205546  195912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:41:13.205603  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:41:13.230253  195912 cri.go:89] found id: ""
	I1213 19:41:13.230275  195912 logs.go:282] 0 containers: []
	W1213 19:41:13.230284  195912 logs.go:284] No container was found matching "coredns"
	I1213 19:41:13.230290  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:41:13.230349  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:41:13.255614  195912 cri.go:89] found id: ""
	I1213 19:41:13.255640  195912 logs.go:282] 0 containers: []
	W1213 19:41:13.255649  195912 logs.go:284] No container was found matching "kube-scheduler"
	I1213 19:41:13.255655  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:41:13.255765  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:41:13.282285  195912 cri.go:89] found id: ""
	I1213 19:41:13.282310  195912 logs.go:282] 0 containers: []
	W1213 19:41:13.282319  195912 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:41:13.282326  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:41:13.282386  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:41:13.307938  195912 cri.go:89] found id: ""
	I1213 19:41:13.308002  195912 logs.go:282] 0 containers: []
	W1213 19:41:13.308026  195912 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 19:41:13.308045  195912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:41:13.308125  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:41:13.337646  195912 cri.go:89] found id: ""
	I1213 19:41:13.337670  195912 logs.go:282] 0 containers: []
	W1213 19:41:13.337682  195912 logs.go:284] No container was found matching "kindnet"
	I1213 19:41:13.337688  195912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 19:41:13.337750  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 19:41:13.366952  195912 cri.go:89] found id: ""
	I1213 19:41:13.366978  195912 logs.go:282] 0 containers: []
	W1213 19:41:13.366987  195912 logs.go:284] No container was found matching "storage-provisioner"
	I1213 19:41:13.366996  195912 logs.go:123] Gathering logs for dmesg ...
	I1213 19:41:13.367009  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:41:13.380838  195912 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:41:13.380873  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:41:13.446556  195912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:41:13.446580  195912 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:41:13.446593  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:41:13.477529  195912 logs.go:123] Gathering logs for container status ...
	I1213 19:41:13.477561  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:41:13.505831  195912 logs.go:123] Gathering logs for kubelet ...
	I1213 19:41:13.505856  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:41:16.073688  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:41:16.085407  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:41:16.085484  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:41:16.123312  195912 cri.go:89] found id: ""
	I1213 19:41:16.123334  195912 logs.go:282] 0 containers: []
	W1213 19:41:16.123343  195912 logs.go:284] No container was found matching "kube-apiserver"
	I1213 19:41:16.123349  195912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:41:16.123409  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:41:16.149621  195912 cri.go:89] found id: ""
	I1213 19:41:16.149646  195912 logs.go:282] 0 containers: []
	W1213 19:41:16.149656  195912 logs.go:284] No container was found matching "etcd"
	I1213 19:41:16.149662  195912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:41:16.149723  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:41:16.175740  195912 cri.go:89] found id: ""
	I1213 19:41:16.175765  195912 logs.go:282] 0 containers: []
	W1213 19:41:16.175774  195912 logs.go:284] No container was found matching "coredns"
	I1213 19:41:16.175782  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:41:16.175845  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:41:16.201500  195912 cri.go:89] found id: ""
	I1213 19:41:16.201526  195912 logs.go:282] 0 containers: []
	W1213 19:41:16.201534  195912 logs.go:284] No container was found matching "kube-scheduler"
	I1213 19:41:16.201540  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:41:16.201600  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:41:16.229807  195912 cri.go:89] found id: ""
	I1213 19:41:16.229832  195912 logs.go:282] 0 containers: []
	W1213 19:41:16.229843  195912 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:41:16.229849  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:41:16.229909  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:41:16.255069  195912 cri.go:89] found id: ""
	I1213 19:41:16.255099  195912 logs.go:282] 0 containers: []
	W1213 19:41:16.255108  195912 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 19:41:16.255116  195912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:41:16.255194  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:41:16.279851  195912 cri.go:89] found id: ""
	I1213 19:41:16.279876  195912 logs.go:282] 0 containers: []
	W1213 19:41:16.279885  195912 logs.go:284] No container was found matching "kindnet"
	I1213 19:41:16.279891  195912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 19:41:16.280007  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 19:41:16.306366  195912 cri.go:89] found id: ""
	I1213 19:41:16.306392  195912 logs.go:282] 0 containers: []
	W1213 19:41:16.306402  195912 logs.go:284] No container was found matching "storage-provisioner"
	I1213 19:41:16.306410  195912 logs.go:123] Gathering logs for kubelet ...
	I1213 19:41:16.306424  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:41:16.376123  195912 logs.go:123] Gathering logs for dmesg ...
	I1213 19:41:16.376158  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:41:16.391748  195912 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:41:16.391773  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:41:16.460601  195912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:41:16.460620  195912 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:41:16.460632  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:41:16.492539  195912 logs.go:123] Gathering logs for container status ...
	I1213 19:41:16.492573  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:41:19.023322  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:41:19.033316  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:41:19.033385  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:41:19.058414  195912 cri.go:89] found id: ""
	I1213 19:41:19.058439  195912 logs.go:282] 0 containers: []
	W1213 19:41:19.058448  195912 logs.go:284] No container was found matching "kube-apiserver"
	I1213 19:41:19.058454  195912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:41:19.058514  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:41:19.097391  195912 cri.go:89] found id: ""
	I1213 19:41:19.097421  195912 logs.go:282] 0 containers: []
	W1213 19:41:19.097431  195912 logs.go:284] No container was found matching "etcd"
	I1213 19:41:19.097437  195912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:41:19.097501  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:41:19.130698  195912 cri.go:89] found id: ""
	I1213 19:41:19.130725  195912 logs.go:282] 0 containers: []
	W1213 19:41:19.130735  195912 logs.go:284] No container was found matching "coredns"
	I1213 19:41:19.130741  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:41:19.130800  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:41:19.160002  195912 cri.go:89] found id: ""
	I1213 19:41:19.160029  195912 logs.go:282] 0 containers: []
	W1213 19:41:19.160038  195912 logs.go:284] No container was found matching "kube-scheduler"
	I1213 19:41:19.160044  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:41:19.160118  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:41:19.191915  195912 cri.go:89] found id: ""
	I1213 19:41:19.191942  195912 logs.go:282] 0 containers: []
	W1213 19:41:19.191951  195912 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:41:19.191957  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:41:19.192019  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:41:19.217939  195912 cri.go:89] found id: ""
	I1213 19:41:19.217968  195912 logs.go:282] 0 containers: []
	W1213 19:41:19.217977  195912 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 19:41:19.217984  195912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:41:19.218048  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:41:19.248868  195912 cri.go:89] found id: ""
	I1213 19:41:19.248891  195912 logs.go:282] 0 containers: []
	W1213 19:41:19.248900  195912 logs.go:284] No container was found matching "kindnet"
	I1213 19:41:19.248906  195912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 19:41:19.248964  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 19:41:19.274271  195912 cri.go:89] found id: ""
	I1213 19:41:19.274295  195912 logs.go:282] 0 containers: []
	W1213 19:41:19.274304  195912 logs.go:284] No container was found matching "storage-provisioner"
	I1213 19:41:19.274313  195912 logs.go:123] Gathering logs for kubelet ...
	I1213 19:41:19.274329  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:41:19.345155  195912 logs.go:123] Gathering logs for dmesg ...
	I1213 19:41:19.345193  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:41:19.359360  195912 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:41:19.359385  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:41:19.428909  195912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:41:19.428935  195912 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:41:19.428947  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:41:19.460192  195912 logs.go:123] Gathering logs for container status ...
	I1213 19:41:19.460224  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:41:21.990357  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:41:22.001768  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:41:22.001844  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:41:22.040155  195912 cri.go:89] found id: ""
	I1213 19:41:22.040181  195912 logs.go:282] 0 containers: []
	W1213 19:41:22.040190  195912 logs.go:284] No container was found matching "kube-apiserver"
	I1213 19:41:22.040196  195912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:41:22.040257  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:41:22.071442  195912 cri.go:89] found id: ""
	I1213 19:41:22.071469  195912 logs.go:282] 0 containers: []
	W1213 19:41:22.071479  195912 logs.go:284] No container was found matching "etcd"
	I1213 19:41:22.071486  195912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:41:22.071550  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:41:22.166534  195912 cri.go:89] found id: ""
	I1213 19:41:22.166557  195912 logs.go:282] 0 containers: []
	W1213 19:41:22.166565  195912 logs.go:284] No container was found matching "coredns"
	I1213 19:41:22.166571  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:41:22.166629  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:41:22.202160  195912 cri.go:89] found id: ""
	I1213 19:41:22.202181  195912 logs.go:282] 0 containers: []
	W1213 19:41:22.202190  195912 logs.go:284] No container was found matching "kube-scheduler"
	I1213 19:41:22.202197  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:41:22.202253  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:41:22.239464  195912 cri.go:89] found id: ""
	I1213 19:41:22.239489  195912 logs.go:282] 0 containers: []
	W1213 19:41:22.239499  195912 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:41:22.239505  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:41:22.239565  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:41:22.288879  195912 cri.go:89] found id: ""
	I1213 19:41:22.288902  195912 logs.go:282] 0 containers: []
	W1213 19:41:22.288910  195912 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 19:41:22.288917  195912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:41:22.289002  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:41:22.319372  195912 cri.go:89] found id: ""
	I1213 19:41:22.319400  195912 logs.go:282] 0 containers: []
	W1213 19:41:22.319410  195912 logs.go:284] No container was found matching "kindnet"
	I1213 19:41:22.319416  195912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 19:41:22.319474  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 19:41:22.355145  195912 cri.go:89] found id: ""
	I1213 19:41:22.355170  195912 logs.go:282] 0 containers: []
	W1213 19:41:22.355181  195912 logs.go:284] No container was found matching "storage-provisioner"
	I1213 19:41:22.355190  195912 logs.go:123] Gathering logs for kubelet ...
	I1213 19:41:22.355201  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:41:22.431984  195912 logs.go:123] Gathering logs for dmesg ...
	I1213 19:41:22.432061  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:41:22.452717  195912 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:41:22.452792  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:41:22.548258  195912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:41:22.548319  195912 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:41:22.548355  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:41:22.583283  195912 logs.go:123] Gathering logs for container status ...
	I1213 19:41:22.583318  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:41:25.129186  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:41:25.139281  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:41:25.139354  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:41:25.165430  195912 cri.go:89] found id: ""
	I1213 19:41:25.165455  195912 logs.go:282] 0 containers: []
	W1213 19:41:25.165468  195912 logs.go:284] No container was found matching "kube-apiserver"
	I1213 19:41:25.165475  195912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:41:25.165538  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:41:25.192146  195912 cri.go:89] found id: ""
	I1213 19:41:25.192172  195912 logs.go:282] 0 containers: []
	W1213 19:41:25.192181  195912 logs.go:284] No container was found matching "etcd"
	I1213 19:41:25.192187  195912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:41:25.192248  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:41:25.219963  195912 cri.go:89] found id: ""
	I1213 19:41:25.219991  195912 logs.go:282] 0 containers: []
	W1213 19:41:25.220000  195912 logs.go:284] No container was found matching "coredns"
	I1213 19:41:25.220007  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:41:25.220067  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:41:25.245676  195912 cri.go:89] found id: ""
	I1213 19:41:25.245703  195912 logs.go:282] 0 containers: []
	W1213 19:41:25.245713  195912 logs.go:284] No container was found matching "kube-scheduler"
	I1213 19:41:25.245719  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:41:25.245782  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:41:25.278917  195912 cri.go:89] found id: ""
	I1213 19:41:25.278948  195912 logs.go:282] 0 containers: []
	W1213 19:41:25.278957  195912 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:41:25.278963  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:41:25.279024  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:41:25.321729  195912 cri.go:89] found id: ""
	I1213 19:41:25.321755  195912 logs.go:282] 0 containers: []
	W1213 19:41:25.321765  195912 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 19:41:25.321771  195912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:41:25.321831  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:41:25.358454  195912 cri.go:89] found id: ""
	I1213 19:41:25.358480  195912 logs.go:282] 0 containers: []
	W1213 19:41:25.358489  195912 logs.go:284] No container was found matching "kindnet"
	I1213 19:41:25.358496  195912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 19:41:25.358553  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 19:41:25.399735  195912 cri.go:89] found id: ""
	I1213 19:41:25.399764  195912 logs.go:282] 0 containers: []
	W1213 19:41:25.399773  195912 logs.go:284] No container was found matching "storage-provisioner"
	I1213 19:41:25.399782  195912 logs.go:123] Gathering logs for kubelet ...
	I1213 19:41:25.399795  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:41:25.479048  195912 logs.go:123] Gathering logs for dmesg ...
	I1213 19:41:25.479119  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:41:25.494922  195912 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:41:25.494951  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:41:25.578355  195912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:41:25.578377  195912 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:41:25.578390  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:41:25.617350  195912 logs.go:123] Gathering logs for container status ...
	I1213 19:41:25.617429  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:41:28.161154  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:41:28.171300  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:41:28.171373  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:41:28.200331  195912 cri.go:89] found id: ""
	I1213 19:41:28.200355  195912 logs.go:282] 0 containers: []
	W1213 19:41:28.200364  195912 logs.go:284] No container was found matching "kube-apiserver"
	I1213 19:41:28.200376  195912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:41:28.200440  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:41:28.227396  195912 cri.go:89] found id: ""
	I1213 19:41:28.227423  195912 logs.go:282] 0 containers: []
	W1213 19:41:28.227432  195912 logs.go:284] No container was found matching "etcd"
	I1213 19:41:28.227438  195912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:41:28.227498  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:41:28.253153  195912 cri.go:89] found id: ""
	I1213 19:41:28.253180  195912 logs.go:282] 0 containers: []
	W1213 19:41:28.253190  195912 logs.go:284] No container was found matching "coredns"
	I1213 19:41:28.253196  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:41:28.253257  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:41:28.280326  195912 cri.go:89] found id: ""
	I1213 19:41:28.280349  195912 logs.go:282] 0 containers: []
	W1213 19:41:28.280358  195912 logs.go:284] No container was found matching "kube-scheduler"
	I1213 19:41:28.280364  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:41:28.280424  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:41:28.307889  195912 cri.go:89] found id: ""
	I1213 19:41:28.307913  195912 logs.go:282] 0 containers: []
	W1213 19:41:28.307921  195912 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:41:28.307930  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:41:28.307992  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:41:28.333504  195912 cri.go:89] found id: ""
	I1213 19:41:28.333526  195912 logs.go:282] 0 containers: []
	W1213 19:41:28.333535  195912 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 19:41:28.333541  195912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:41:28.333599  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:41:28.359253  195912 cri.go:89] found id: ""
	I1213 19:41:28.359277  195912 logs.go:282] 0 containers: []
	W1213 19:41:28.359286  195912 logs.go:284] No container was found matching "kindnet"
	I1213 19:41:28.359292  195912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 19:41:28.359358  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 19:41:28.384102  195912 cri.go:89] found id: ""
	I1213 19:41:28.384127  195912 logs.go:282] 0 containers: []
	W1213 19:41:28.384136  195912 logs.go:284] No container was found matching "storage-provisioner"
	I1213 19:41:28.384144  195912 logs.go:123] Gathering logs for kubelet ...
	I1213 19:41:28.384160  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:41:28.451355  195912 logs.go:123] Gathering logs for dmesg ...
	I1213 19:41:28.451391  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:41:28.465207  195912 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:41:28.465238  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:41:28.536729  195912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:41:28.536751  195912 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:41:28.536764  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:41:28.567999  195912 logs.go:123] Gathering logs for container status ...
	I1213 19:41:28.568035  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:41:31.099834  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:41:31.114587  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:41:31.114673  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:41:31.152002  195912 cri.go:89] found id: ""
	I1213 19:41:31.152029  195912 logs.go:282] 0 containers: []
	W1213 19:41:31.152038  195912 logs.go:284] No container was found matching "kube-apiserver"
	I1213 19:41:31.152045  195912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:41:31.152108  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:41:31.179733  195912 cri.go:89] found id: ""
	I1213 19:41:31.179756  195912 logs.go:282] 0 containers: []
	W1213 19:41:31.179764  195912 logs.go:284] No container was found matching "etcd"
	I1213 19:41:31.179770  195912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:41:31.179830  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:41:31.206526  195912 cri.go:89] found id: ""
	I1213 19:41:31.206553  195912 logs.go:282] 0 containers: []
	W1213 19:41:31.206562  195912 logs.go:284] No container was found matching "coredns"
	I1213 19:41:31.206569  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:41:31.206630  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:41:31.234299  195912 cri.go:89] found id: ""
	I1213 19:41:31.234325  195912 logs.go:282] 0 containers: []
	W1213 19:41:31.234333  195912 logs.go:284] No container was found matching "kube-scheduler"
	I1213 19:41:31.234346  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:41:31.234409  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:41:31.260462  195912 cri.go:89] found id: ""
	I1213 19:41:31.260487  195912 logs.go:282] 0 containers: []
	W1213 19:41:31.260496  195912 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:41:31.260502  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:41:31.260563  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:41:31.286534  195912 cri.go:89] found id: ""
	I1213 19:41:31.286560  195912 logs.go:282] 0 containers: []
	W1213 19:41:31.286572  195912 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 19:41:31.286579  195912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:41:31.286646  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:41:31.315929  195912 cri.go:89] found id: ""
	I1213 19:41:31.315954  195912 logs.go:282] 0 containers: []
	W1213 19:41:31.315963  195912 logs.go:284] No container was found matching "kindnet"
	I1213 19:41:31.315972  195912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 19:41:31.316033  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 19:41:31.341700  195912 cri.go:89] found id: ""
	I1213 19:41:31.341727  195912 logs.go:282] 0 containers: []
	W1213 19:41:31.341736  195912 logs.go:284] No container was found matching "storage-provisioner"
	I1213 19:41:31.341745  195912 logs.go:123] Gathering logs for kubelet ...
	I1213 19:41:31.341757  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:41:31.408953  195912 logs.go:123] Gathering logs for dmesg ...
	I1213 19:41:31.408988  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:41:31.423155  195912 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:41:31.423190  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:41:31.490124  195912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:41:31.490189  195912 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:41:31.490208  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:41:31.522503  195912 logs.go:123] Gathering logs for container status ...
	I1213 19:41:31.522542  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:41:34.053622  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:41:34.067910  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:41:34.068010  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:41:34.106866  195912 cri.go:89] found id: ""
	I1213 19:41:34.106894  195912 logs.go:282] 0 containers: []
	W1213 19:41:34.106904  195912 logs.go:284] No container was found matching "kube-apiserver"
	I1213 19:41:34.106910  195912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:41:34.106969  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:41:34.141664  195912 cri.go:89] found id: ""
	I1213 19:41:34.141688  195912 logs.go:282] 0 containers: []
	W1213 19:41:34.141697  195912 logs.go:284] No container was found matching "etcd"
	I1213 19:41:34.141703  195912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:41:34.141760  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:41:34.172033  195912 cri.go:89] found id: ""
	I1213 19:41:34.172059  195912 logs.go:282] 0 containers: []
	W1213 19:41:34.172068  195912 logs.go:284] No container was found matching "coredns"
	I1213 19:41:34.172075  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:41:34.172138  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:41:34.197674  195912 cri.go:89] found id: ""
	I1213 19:41:34.197700  195912 logs.go:282] 0 containers: []
	W1213 19:41:34.197709  195912 logs.go:284] No container was found matching "kube-scheduler"
	I1213 19:41:34.197715  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:41:34.197773  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:41:34.223513  195912 cri.go:89] found id: ""
	I1213 19:41:34.223539  195912 logs.go:282] 0 containers: []
	W1213 19:41:34.223547  195912 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:41:34.223554  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:41:34.223612  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:41:34.251658  195912 cri.go:89] found id: ""
	I1213 19:41:34.251684  195912 logs.go:282] 0 containers: []
	W1213 19:41:34.251692  195912 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 19:41:34.251698  195912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:41:34.251756  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:41:34.277617  195912 cri.go:89] found id: ""
	I1213 19:41:34.277640  195912 logs.go:282] 0 containers: []
	W1213 19:41:34.277648  195912 logs.go:284] No container was found matching "kindnet"
	I1213 19:41:34.277654  195912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 19:41:34.277710  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 19:41:34.305881  195912 cri.go:89] found id: ""
	I1213 19:41:34.305958  195912 logs.go:282] 0 containers: []
	W1213 19:41:34.305980  195912 logs.go:284] No container was found matching "storage-provisioner"
	I1213 19:41:34.306004  195912 logs.go:123] Gathering logs for kubelet ...
	I1213 19:41:34.306045  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:41:34.374003  195912 logs.go:123] Gathering logs for dmesg ...
	I1213 19:41:34.374040  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:41:34.389327  195912 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:41:34.389355  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:41:34.454691  195912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:41:34.454754  195912 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:41:34.454781  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:41:34.486320  195912 logs.go:123] Gathering logs for container status ...
	I1213 19:41:34.486358  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:41:37.019650  195912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:41:37.029859  195912 kubeadm.go:602] duration metric: took 4m4.50574665s to restartPrimaryControlPlane
	W1213 19:41:37.029926  195912 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
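The cycle above (a pgrep for kube-apiserver, a crictl ps for each control-plane component, then a round of kubelet/dmesg/describe-nodes/CRI-O/container-status log gathering) repeats roughly every three seconds until restartPrimaryControlPlane gives up after about four minutes and falls back to a full kubeadm reset. A minimal sketch of that wait-and-poll pattern, written as standalone Go rather than minikube's actual kubeadm.go code; the helper name, the 4-minute deadline, and the 3-second interval are assumptions inferred from the timestamps and duration metric in the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// apiserverVisible mirrors the two checks the log shows on every cycle:
//   sudo pgrep -xnf kube-apiserver.*minikube.*
//   sudo crictl ps -a --quiet --name=kube-apiserver
func apiserverVisible() bool {
	if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
		return true // a matching process exists
	}
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver").Output()
	return err == nil && strings.TrimSpace(string(out)) != "" // at least one container ID returned
}

func main() {
	deadline := time.Now().Add(4 * time.Minute) // assumed timeout, matching the ~4m duration metric above
	for time.Now().Before(deadline) {
		if apiserverVisible() {
			fmt.Println("kube-apiserver is up")
			return
		}
		time.Sleep(3 * time.Second) // matches the ~3s spacing between log cycles
	}
	fmt.Println("gave up waiting; falling back to kubeadm reset")
}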
	I1213 19:41:37.029986  195912 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 19:41:37.453174  195912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 19:41:37.465940  195912 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 19:41:37.473888  195912 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 19:41:37.473987  195912 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 19:41:37.482923  195912 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 19:41:37.482944  195912 kubeadm.go:158] found existing configuration files:
	
	I1213 19:41:37.483008  195912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 19:41:37.491367  195912 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 19:41:37.491453  195912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 19:41:37.499289  195912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 19:41:37.507374  195912 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 19:41:37.507446  195912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 19:41:37.515333  195912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 19:41:37.523483  195912 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 19:41:37.523555  195912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 19:41:37.531633  195912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 19:41:37.539803  195912 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 19:41:37.539913  195912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
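Before re-running kubeadm init, minikube probes each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that does not contain it; in this run all four greps exit with status 2 because the files were already wiped by the reset. A small Go sketch of that check-and-remove step, under the same caveat that this is an illustration and not the project's kubeadm.go implementation:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero when the endpoint is absent or the file is
		// missing; either way the stale config is deleted so that
		// `kubeadm init` regenerates it from kubeadm.yaml.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("removing stale %s\n", f)
			if err := exec.Command("sudo", "rm", "-f", f).Run(); err != nil {
				fmt.Fprintf(os.Stderr, "failed to remove %s: %v\n", f, err)
			}
		}
	}
}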
	I1213 19:41:37.547478  195912 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 19:41:37.584767  195912 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 19:41:37.584827  195912 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 19:41:37.657615  195912 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 19:41:37.657690  195912 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 19:41:37.657731  195912 kubeadm.go:319] OS: Linux
	I1213 19:41:37.657780  195912 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 19:41:37.657835  195912 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 19:41:37.657886  195912 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 19:41:37.657938  195912 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 19:41:37.657989  195912 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 19:41:37.658041  195912 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 19:41:37.658089  195912 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 19:41:37.658142  195912 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 19:41:37.658193  195912 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 19:41:37.728255  195912 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 19:41:37.728374  195912 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 19:41:37.728471  195912 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 19:41:37.745544  195912 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 19:41:37.751728  195912 out.go:252]   - Generating certificates and keys ...
	I1213 19:41:37.751827  195912 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 19:41:37.751900  195912 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 19:41:37.751982  195912 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 19:41:37.752049  195912 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 19:41:37.752121  195912 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 19:41:37.752179  195912 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 19:41:37.752245  195912 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 19:41:37.752310  195912 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 19:41:37.752387  195912 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 19:41:37.752463  195912 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 19:41:37.752504  195912 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 19:41:37.752563  195912 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 19:41:38.090026  195912 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 19:41:38.443378  195912 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 19:41:39.121464  195912 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 19:41:39.348627  195912 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 19:41:39.853706  195912 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 19:41:39.854532  195912 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 19:41:39.866701  195912 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 19:41:39.870009  195912 out.go:252]   - Booting up control plane ...
	I1213 19:41:39.870121  195912 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 19:41:39.870202  195912 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 19:41:39.870921  195912 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 19:41:39.891487  195912 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 19:41:39.891757  195912 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 19:41:39.901989  195912 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 19:41:39.902338  195912 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 19:41:39.902518  195912 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 19:41:40.079725  195912 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 19:41:40.079854  195912 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 19:45:40.080380  195912 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001035238s
	I1213 19:45:40.080417  195912 kubeadm.go:319] 
	I1213 19:45:40.080475  195912 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 19:45:40.080509  195912 kubeadm.go:319] 	- The kubelet is not running
	I1213 19:45:40.080614  195912 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 19:45:40.080620  195912 kubeadm.go:319] 
	I1213 19:45:40.080725  195912 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 19:45:40.080757  195912 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 19:45:40.080788  195912 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 19:45:40.080793  195912 kubeadm.go:319] 
	I1213 19:45:40.085198  195912 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 19:45:40.085624  195912 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 19:45:40.085732  195912 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 19:45:40.086007  195912 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1213 19:45:40.086014  195912 kubeadm.go:319] 
	I1213 19:45:40.086083  195912 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1213 19:45:40.086193  195912 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001035238s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001035238s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
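	The failed attempt above reduces to the kubelet never answering its health endpoint, so kubeadm times out in wait-control-plane. A minimal sketch of the manual checks that the hints in the output point at, assuming shell access to the minikube node (for example via minikube ssh -p <profile>); the healthz URL and the two systemd commands are taken from the log itself:
	  # Probe the kubelet health endpoint that kubeadm polls during wait-control-plane.
	  curl -sSL http://127.0.0.1:10248/healthz || echo "kubelet healthz unreachable"
	  # Inspect the kubelet service state and its most recent journal entries.
	  sudo systemctl status kubelet --no-pager
	  sudo journalctl -xeu kubelet -n 100 --no-pager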
	I1213 19:45:40.086276  195912 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 19:45:40.529415  195912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 19:45:40.543439  195912 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 19:45:40.543505  195912 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 19:45:40.554335  195912 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 19:45:40.554351  195912 kubeadm.go:158] found existing configuration files:
	
	I1213 19:45:40.554400  195912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 19:45:40.563577  195912 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 19:45:40.563695  195912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 19:45:40.571929  195912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 19:45:40.581358  195912 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 19:45:40.581474  195912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 19:45:40.589798  195912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 19:45:40.598959  195912 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 19:45:40.599071  195912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 19:45:40.607220  195912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 19:45:40.616570  195912 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 19:45:40.616687  195912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 19:45:40.625231  195912 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 19:45:40.675837  195912 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 19:45:40.676418  195912 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 19:45:40.765930  195912 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 19:45:40.766078  195912 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 19:45:40.766166  195912 kubeadm.go:319] OS: Linux
	I1213 19:45:40.766244  195912 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 19:45:40.766312  195912 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 19:45:40.766399  195912 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 19:45:40.766472  195912 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 19:45:40.766562  195912 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 19:45:40.766628  195912 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 19:45:40.766678  195912 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 19:45:40.766739  195912 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 19:45:40.766791  195912 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 19:45:40.853439  195912 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 19:45:40.853621  195912 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 19:45:40.853742  195912 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 19:45:40.869210  195912 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 19:45:40.874902  195912 out.go:252]   - Generating certificates and keys ...
	I1213 19:45:40.875031  195912 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 19:45:40.875156  195912 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 19:45:40.875252  195912 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 19:45:40.875324  195912 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 19:45:40.875393  195912 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 19:45:40.875445  195912 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 19:45:40.875505  195912 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 19:45:40.875563  195912 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 19:45:40.875633  195912 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 19:45:40.875701  195912 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 19:45:40.875737  195912 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 19:45:40.875789  195912 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 19:45:41.065607  195912 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 19:45:41.550840  195912 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 19:45:41.745323  195912 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 19:45:42.351984  195912 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 19:45:42.836629  195912 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 19:45:42.837423  195912 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 19:45:42.850851  195912 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 19:45:42.853959  195912 out.go:252]   - Booting up control plane ...
	I1213 19:45:42.854067  195912 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 19:45:42.854144  195912 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 19:45:42.854212  195912 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 19:45:42.872425  195912 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 19:45:42.872529  195912 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 19:45:42.888535  195912 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 19:45:42.889155  195912 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 19:45:42.889389  195912 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 19:45:43.091592  195912 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 19:45:43.091871  195912 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 19:49:43.093650  195912 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001935047s
	I1213 19:49:43.093686  195912 kubeadm.go:319] 
	I1213 19:49:43.093789  195912 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 19:49:43.093854  195912 kubeadm.go:319] 	- The kubelet is not running
	I1213 19:49:43.093959  195912 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 19:49:43.093966  195912 kubeadm.go:319] 
	I1213 19:49:43.094071  195912 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 19:49:43.094103  195912 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 19:49:43.094134  195912 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 19:49:43.094139  195912 kubeadm.go:319] 
	I1213 19:49:43.098110  195912 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 19:49:43.098543  195912 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 19:49:43.098655  195912 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 19:49:43.098908  195912 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 19:49:43.098917  195912 kubeadm.go:319] 
	I1213 19:49:43.098998  195912 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 19:49:43.099066  195912 kubeadm.go:403] duration metric: took 12m10.617541701s to StartCluster
	I1213 19:49:43.099119  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:49:43.099182  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:49:43.124755  195912 cri.go:89] found id: ""
	I1213 19:49:43.124777  195912 logs.go:282] 0 containers: []
	W1213 19:49:43.124786  195912 logs.go:284] No container was found matching "kube-apiserver"
	I1213 19:49:43.124792  195912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:49:43.124864  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:49:43.150651  195912 cri.go:89] found id: ""
	I1213 19:49:43.150677  195912 logs.go:282] 0 containers: []
	W1213 19:49:43.150686  195912 logs.go:284] No container was found matching "etcd"
	I1213 19:49:43.150692  195912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:49:43.150750  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:49:43.179821  195912 cri.go:89] found id: ""
	I1213 19:49:43.179844  195912 logs.go:282] 0 containers: []
	W1213 19:49:43.179853  195912 logs.go:284] No container was found matching "coredns"
	I1213 19:49:43.179859  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:49:43.179917  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:49:43.206244  195912 cri.go:89] found id: ""
	I1213 19:49:43.206266  195912 logs.go:282] 0 containers: []
	W1213 19:49:43.206274  195912 logs.go:284] No container was found matching "kube-scheduler"
	I1213 19:49:43.206281  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:49:43.206337  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:49:43.233517  195912 cri.go:89] found id: ""
	I1213 19:49:43.233542  195912 logs.go:282] 0 containers: []
	W1213 19:49:43.233551  195912 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:49:43.233557  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:49:43.233619  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:49:43.260154  195912 cri.go:89] found id: ""
	I1213 19:49:43.260182  195912 logs.go:282] 0 containers: []
	W1213 19:49:43.260190  195912 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 19:49:43.260196  195912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:49:43.260258  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:49:43.288804  195912 cri.go:89] found id: ""
	I1213 19:49:43.288833  195912 logs.go:282] 0 containers: []
	W1213 19:49:43.288842  195912 logs.go:284] No container was found matching "kindnet"
	I1213 19:49:43.288847  195912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 19:49:43.288945  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 19:49:43.316457  195912 cri.go:89] found id: ""
	I1213 19:49:43.316486  195912 logs.go:282] 0 containers: []
	W1213 19:49:43.316495  195912 logs.go:284] No container was found matching "storage-provisioner"
	I1213 19:49:43.316505  195912 logs.go:123] Gathering logs for kubelet ...
	I1213 19:49:43.316516  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:49:43.413739  195912 logs.go:123] Gathering logs for dmesg ...
	I1213 19:49:43.413790  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:49:43.430594  195912 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:49:43.430634  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:49:43.504376  195912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
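	The describe-nodes step fails for the same underlying reason: with the kubelet down no static pods start, so nothing serves the API on localhost:8443. A sketch of how one might confirm that on the node (assumes shell access; crictl with CRI-O is what the log itself uses a few lines below):
	  # Check whether anything is listening on the apiserver port the kubeconfig points at.
	  sudo ss -ltnp | grep ':8443' || echo "no kube-apiserver listening on 8443"
	  # List any kube-apiserver containers CRI-O did manage to create.
	  sudo crictl ps -a --name kube-apiserver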
	I1213 19:49:43.504397  195912 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:49:43.504409  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:49:43.538085  195912 logs.go:123] Gathering logs for container status ...
	I1213 19:49:43.538123  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 19:49:43.571637  195912 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001935047s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 19:49:43.571694  195912 out.go:285] * 
	* 
	W1213 19:49:43.571750  195912 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001935047s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001935047s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 19:49:43.571782  195912 out.go:285] * 
	* 
	W1213 19:49:43.574330  195912 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
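	Both SystemVerification warnings repeated throughout this run point at the cgroup setup: the node is still on cgroups v1, which kubelet v1.35+ only tolerates when the configuration option named in the warning, FailCgroupV1, is explicitly set to false. A sketch of how one might verify both sides of that on the node; the config path comes from the kubelet-start lines above, and the field name is assumed to match the warning text:
	  # Report which cgroup hierarchy is mounted: "cgroup2fs" means v2, "tmpfs" means v1.
	  stat -fc %T /sys/fs/cgroup/
	  # Check whether the kubelet config kubeadm wrote carries a failCgroupV1 setting.
	  grep -i failcgroupv1 /var/lib/kubelet/config.yaml || echo "failCgroupV1 not set"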
	I1213 19:49:43.579795  195912 out.go:203] 
	W1213 19:49:43.582736  195912 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001935047s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001935047s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 19:49:43.582816  195912 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 19:49:43.582845  195912 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 19:49:43.585949  195912 out.go:203] 

                                                
                                                
** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-linux-arm64 start -p kubernetes-upgrade-203932 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio : exit status 109
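Note: the failing start above already names a candidate workaround in its own output (--extra-config=kubelet.cgroup-driver=systemd). A sketch of how the same invocation could be retried manually with that flag added; the flags simply mirror the failing command plus the log's own suggestion, and nothing here is verified to fix the failure:

    out/minikube-linux-arm64 start -p kubernetes-upgrade-203932 --memory=3072 \
      --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 \
      --driver=docker --container-runtime=crio \
      --extra-config=kubelet.cgroup-driver=systemd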
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-203932 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-203932 version --output=json: exit status 1 (134.250239ms)

                                                
                                                
-- stdout --
	{
	  "clientVersion": {
	    "major": "1",
	    "minor": "33",
	    "gitVersion": "v1.33.2",
	    "gitCommit": "a57b6f7709f6c2722b92f07b8b4c48210a51fc40",
	    "gitTreeState": "clean",
	    "buildDate": "2025-06-17T18:41:31Z",
	    "goVersion": "go1.24.4",
	    "compiler": "gc",
	    "platform": "linux/arm64"
	  },
	  "kustomizeVersion": "v5.6.0"
	}

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.76.2:8443 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
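Note: the stderr above shows the client (kubectl v1.33.2) reaching the apiserver endpoint 192.168.76.2:8443 and being refused, consistent with the earlier K8S_KUBELET_NOT_RUNNING exit. A hedged triage sketch for probing the endpoint by hand, using that address plus the published host port (127.0.0.1:33006 per the docker inspect below); a healthy apiserver would answer /healthz, here both probes should be refused:

    curl -k https://192.168.76.2:8443/healthz
    curl -k https://127.0.0.1:33006/healthz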
panic.go:615: *** TestKubernetesUpgrade FAILED at 2025-12-13 19:49:44.147944961 +0000 UTC m=+5577.618087831
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestKubernetesUpgrade]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect kubernetes-upgrade-203932
helpers_test.go:244: (dbg) docker inspect kubernetes-upgrade-203932:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c1afffdcf2832ec51d0897e974ec498289e5e1740fd4e3a6f998904966a92f47",
	        "Created": "2025-12-13T19:36:37.916503954Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 196069,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T19:37:17.526696441Z",
	            "FinishedAt": "2025-12-13T19:37:16.154620069Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/c1afffdcf2832ec51d0897e974ec498289e5e1740fd4e3a6f998904966a92f47/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c1afffdcf2832ec51d0897e974ec498289e5e1740fd4e3a6f998904966a92f47/hostname",
	        "HostsPath": "/var/lib/docker/containers/c1afffdcf2832ec51d0897e974ec498289e5e1740fd4e3a6f998904966a92f47/hosts",
	        "LogPath": "/var/lib/docker/containers/c1afffdcf2832ec51d0897e974ec498289e5e1740fd4e3a6f998904966a92f47/c1afffdcf2832ec51d0897e974ec498289e5e1740fd4e3a6f998904966a92f47-json.log",
	        "Name": "/kubernetes-upgrade-203932",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-203932:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-203932",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c1afffdcf2832ec51d0897e974ec498289e5e1740fd4e3a6f998904966a92f47",
	                "LowerDir": "/var/lib/docker/overlay2/43a21de5108f3537ced92c15d79b96de4b573556d53bc38f7e97ae9a55e1efb7-init/diff:/var/lib/docker/overlay2/4cda671c3c20fb572bbb254b6cb2d66de67b46788c2aa883ec19024f1ff16f23/diff",
	                "MergedDir": "/var/lib/docker/overlay2/43a21de5108f3537ced92c15d79b96de4b573556d53bc38f7e97ae9a55e1efb7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/43a21de5108f3537ced92c15d79b96de4b573556d53bc38f7e97ae9a55e1efb7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/43a21de5108f3537ced92c15d79b96de4b573556d53bc38f7e97ae9a55e1efb7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-203932",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-203932/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-203932",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-203932",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-203932",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a0379a48e5df90111c63bb7a5be78b180dff950a46a80ead9cd8cc24232f727a",
	            "SandboxKey": "/var/run/docker/netns/a0379a48e5df",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33003"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33004"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33007"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33005"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33006"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-203932": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "06:a4:2c:84:70:dd",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "13f44a88ad038208c0d9f4ea74757656e781ca41071022b92cab8cd0c23b2022",
	                    "EndpointID": "85c0ce3a50a46596f4e9cc64d9065287b5687cd70594e5f8d17b870607621b8d",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "kubernetes-upgrade-203932",
	                        "c1afffdcf283"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
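Note: when only the container state or a single port mapping is of interest, the inspect dump above can be narrowed with docker's --format Go templates; the port template below mirrors the one minikube itself runs later in this log for 22/tcp (a triage sketch, not part of the test run):

    docker inspect -f '{{.State.Status}}' kubernetes-upgrade-203932
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' kubernetes-upgrade-203932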
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-203932 -n kubernetes-upgrade-203932
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-203932 -n kubernetes-upgrade-203932: exit status 2 (317.257246ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
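Note: the non-zero status here reflects a degraded profile rather than a broken command (the harness itself records "may be ok"): the host container reports Running while the control plane never came up. A sketch of pulling the full component view without the --format filter (assuming minikube's usual status output flags):

    out/minikube-linux-arm64 status -p kubernetes-upgrade-203932
    out/minikube-linux-arm64 status -p kubernetes-upgrade-203932 --output json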
helpers_test.go:253: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-203932 logs -n 25
E1213 19:49:44.920667    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:261: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                    ARGS                                                    │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-229943 sudo journalctl -xeu kubelet --all --full --no-pager                                      │ cilium-229943            │ jenkins │ v1.37.0 │ 13 Dec 25 19:46 UTC │                     │
	│ ssh     │ -p cilium-229943 sudo cat /etc/kubernetes/kubelet.conf                                                     │ cilium-229943            │ jenkins │ v1.37.0 │ 13 Dec 25 19:46 UTC │                     │
	│ ssh     │ -p cilium-229943 sudo cat /var/lib/kubelet/config.yaml                                                     │ cilium-229943            │ jenkins │ v1.37.0 │ 13 Dec 25 19:46 UTC │                     │
	│ ssh     │ -p cilium-229943 sudo systemctl status docker --all --full --no-pager                                      │ cilium-229943            │ jenkins │ v1.37.0 │ 13 Dec 25 19:46 UTC │                     │
	│ ssh     │ -p cilium-229943 sudo systemctl cat docker --no-pager                                                      │ cilium-229943            │ jenkins │ v1.37.0 │ 13 Dec 25 19:46 UTC │                     │
	│ ssh     │ -p cilium-229943 sudo cat /etc/docker/daemon.json                                                          │ cilium-229943            │ jenkins │ v1.37.0 │ 13 Dec 25 19:46 UTC │                     │
	│ ssh     │ -p cilium-229943 sudo docker system info                                                                   │ cilium-229943            │ jenkins │ v1.37.0 │ 13 Dec 25 19:46 UTC │                     │
	│ ssh     │ -p cilium-229943 sudo systemctl status cri-docker --all --full --no-pager                                  │ cilium-229943            │ jenkins │ v1.37.0 │ 13 Dec 25 19:46 UTC │                     │
	│ ssh     │ -p cilium-229943 sudo systemctl cat cri-docker --no-pager                                                  │ cilium-229943            │ jenkins │ v1.37.0 │ 13 Dec 25 19:46 UTC │                     │
	│ ssh     │ -p cilium-229943 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                             │ cilium-229943            │ jenkins │ v1.37.0 │ 13 Dec 25 19:46 UTC │                     │
	│ ssh     │ -p cilium-229943 sudo cat /usr/lib/systemd/system/cri-docker.service                                       │ cilium-229943            │ jenkins │ v1.37.0 │ 13 Dec 25 19:46 UTC │                     │
	│ ssh     │ -p cilium-229943 sudo cri-dockerd --version                                                                │ cilium-229943            │ jenkins │ v1.37.0 │ 13 Dec 25 19:46 UTC │                     │
	│ ssh     │ -p cilium-229943 sudo systemctl status containerd --all --full --no-pager                                  │ cilium-229943            │ jenkins │ v1.37.0 │ 13 Dec 25 19:46 UTC │                     │
	│ ssh     │ -p cilium-229943 sudo systemctl cat containerd --no-pager                                                  │ cilium-229943            │ jenkins │ v1.37.0 │ 13 Dec 25 19:46 UTC │                     │
	│ ssh     │ -p cilium-229943 sudo cat /lib/systemd/system/containerd.service                                           │ cilium-229943            │ jenkins │ v1.37.0 │ 13 Dec 25 19:46 UTC │                     │
	│ ssh     │ -p cilium-229943 sudo cat /etc/containerd/config.toml                                                      │ cilium-229943            │ jenkins │ v1.37.0 │ 13 Dec 25 19:46 UTC │                     │
	│ ssh     │ -p cilium-229943 sudo containerd config dump                                                               │ cilium-229943            │ jenkins │ v1.37.0 │ 13 Dec 25 19:46 UTC │                     │
	│ ssh     │ -p cilium-229943 sudo systemctl status crio --all --full --no-pager                                        │ cilium-229943            │ jenkins │ v1.37.0 │ 13 Dec 25 19:46 UTC │                     │
	│ ssh     │ -p cilium-229943 sudo systemctl cat crio --no-pager                                                        │ cilium-229943            │ jenkins │ v1.37.0 │ 13 Dec 25 19:46 UTC │                     │
	│ ssh     │ -p cilium-229943 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                              │ cilium-229943            │ jenkins │ v1.37.0 │ 13 Dec 25 19:46 UTC │                     │
	│ ssh     │ -p cilium-229943 sudo crio config                                                                          │ cilium-229943            │ jenkins │ v1.37.0 │ 13 Dec 25 19:46 UTC │                     │
	│ delete  │ -p cilium-229943                                                                                           │ cilium-229943            │ jenkins │ v1.37.0 │ 13 Dec 25 19:46 UTC │ 13 Dec 25 19:46 UTC │
	│ start   │ -p force-systemd-env-215695 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-env-215695 │ jenkins │ v1.37.0 │ 13 Dec 25 19:46 UTC │ 13 Dec 25 19:46 UTC │
	│ delete  │ -p force-systemd-env-215695                                                                                │ force-systemd-env-215695 │ jenkins │ v1.37.0 │ 13 Dec 25 19:46 UTC │ 13 Dec 25 19:46 UTC │
	│ start   │ -p cert-expiration-609685 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio     │ cert-expiration-609685   │ jenkins │ v1.37.0 │ 13 Dec 25 19:46 UTC │ 13 Dec 25 19:47 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 19:46:46
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 19:46:46.385959  232162 out.go:360] Setting OutFile to fd 1 ...
	I1213 19:46:46.386376  232162 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 19:46:46.386389  232162 out.go:374] Setting ErrFile to fd 2...
	I1213 19:46:46.386410  232162 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 19:46:46.386807  232162 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-2686/.minikube/bin
	I1213 19:46:46.387244  232162 out.go:368] Setting JSON to false
	I1213 19:46:46.388078  232162 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":8959,"bootTime":1765646248,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 19:46:46.388137  232162 start.go:143] virtualization:  
	I1213 19:46:46.391847  232162 out.go:179] * [cert-expiration-609685] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 19:46:46.396307  232162 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 19:46:46.396399  232162 notify.go:221] Checking for updates...
	I1213 19:46:46.403094  232162 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 19:46:46.406349  232162 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-2686/kubeconfig
	I1213 19:46:46.409659  232162 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-2686/.minikube
	I1213 19:46:46.412634  232162 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 19:46:46.415765  232162 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 19:46:46.419449  232162 config.go:182] Loaded profile config "kubernetes-upgrade-203932": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 19:46:46.419536  232162 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 19:46:46.455415  232162 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 19:46:46.455537  232162 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 19:46:46.519673  232162 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 19:46:46.50989737 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 19:46:46.519767  232162 docker.go:319] overlay module found
	I1213 19:46:46.524804  232162 out.go:179] * Using the docker driver based on user configuration
	I1213 19:46:46.527840  232162 start.go:309] selected driver: docker
	I1213 19:46:46.527848  232162 start.go:927] validating driver "docker" against <nil>
	I1213 19:46:46.527860  232162 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 19:46:46.528596  232162 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 19:46:46.589955  232162 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 19:46:46.580634309 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 19:46:46.590099  232162 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 19:46:46.590308  232162 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1213 19:46:46.593489  232162 out.go:179] * Using Docker driver with root privileges
	I1213 19:46:46.596538  232162 cni.go:84] Creating CNI manager for ""
	I1213 19:46:46.596597  232162 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 19:46:46.596605  232162 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 19:46:46.596685  232162 start.go:353] cluster config:
	{Name:cert-expiration-609685 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:cert-expiration-609685 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 19:46:46.601778  232162 out.go:179] * Starting "cert-expiration-609685" primary control-plane node in "cert-expiration-609685" cluster
	I1213 19:46:46.604720  232162 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 19:46:46.607657  232162 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 19:46:46.610588  232162 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 19:46:46.610624  232162 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-2686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1213 19:46:46.610632  232162 cache.go:65] Caching tarball of preloaded images
	I1213 19:46:46.610729  232162 preload.go:238] Found /home/jenkins/minikube-integration/22122-2686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 19:46:46.610738  232162 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1213 19:46:46.610850  232162 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/cert-expiration-609685/config.json ...
	I1213 19:46:46.610870  232162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/cert-expiration-609685/config.json: {Name:mkeece47e0a9e51ece5571b05250d986fc1f4f91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:46:46.611023  232162 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 19:46:46.630510  232162 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 19:46:46.630522  232162 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 19:46:46.630534  232162 cache.go:243] Successfully downloaded all kic artifacts
	I1213 19:46:46.630560  232162 start.go:360] acquireMachinesLock for cert-expiration-609685: {Name:mk22979183262c53892e5acb64d9801a283cedff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 19:46:46.630654  232162 start.go:364] duration metric: took 80.731µs to acquireMachinesLock for "cert-expiration-609685"
	I1213 19:46:46.630678  232162 start.go:93] Provisioning new machine with config: &{Name:cert-expiration-609685 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:cert-expiration-609685 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 19:46:46.630748  232162 start.go:125] createHost starting for "" (driver="docker")
	I1213 19:46:46.634200  232162 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1213 19:46:46.634420  232162 start.go:159] libmachine.API.Create for "cert-expiration-609685" (driver="docker")
	I1213 19:46:46.634453  232162 client.go:173] LocalClient.Create starting
	I1213 19:46:46.634545  232162 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem
	I1213 19:46:46.634577  232162 main.go:143] libmachine: Decoding PEM data...
	I1213 19:46:46.634592  232162 main.go:143] libmachine: Parsing certificate...
	I1213 19:46:46.634637  232162 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem
	I1213 19:46:46.634661  232162 main.go:143] libmachine: Decoding PEM data...
	I1213 19:46:46.634674  232162 main.go:143] libmachine: Parsing certificate...
	I1213 19:46:46.635060  232162 cli_runner.go:164] Run: docker network inspect cert-expiration-609685 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 19:46:46.651278  232162 cli_runner.go:211] docker network inspect cert-expiration-609685 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 19:46:46.651347  232162 network_create.go:284] running [docker network inspect cert-expiration-609685] to gather additional debugging logs...
	I1213 19:46:46.651362  232162 cli_runner.go:164] Run: docker network inspect cert-expiration-609685
	W1213 19:46:46.671323  232162 cli_runner.go:211] docker network inspect cert-expiration-609685 returned with exit code 1
	I1213 19:46:46.671343  232162 network_create.go:287] error running [docker network inspect cert-expiration-609685]: docker network inspect cert-expiration-609685: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network cert-expiration-609685 not found
	I1213 19:46:46.671355  232162 network_create.go:289] output of [docker network inspect cert-expiration-609685]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network cert-expiration-609685 not found
	
	** /stderr **
	I1213 19:46:46.671450  232162 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 19:46:46.688054  232162 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a2f3617b1da5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ee:bd:c1:14:a9:f1} reservation:<nil>}
	I1213 19:46:46.688413  232162 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-a8c05e63b461 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:fe:d2:24:84:48:72} reservation:<nil>}
	I1213 19:46:46.688753  232162 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-927c2c3b273e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:be:dd:e7:50:dd:21} reservation:<nil>}
	I1213 19:46:46.689219  232162 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-13f44a88ad03 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:5a:d1:7e:9e:67:f0} reservation:<nil>}
	I1213 19:46:46.689699  232162 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019eb1b0}
	I1213 19:46:46.689715  232162 network_create.go:124] attempt to create docker network cert-expiration-609685 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1213 19:46:46.689776  232162 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cert-expiration-609685 cert-expiration-609685
	I1213 19:46:46.750756  232162 network_create.go:108] docker network cert-expiration-609685 192.168.85.0/24 created
	I1213 19:46:46.750777  232162 kic.go:121] calculated static IP "192.168.85.2" for the "cert-expiration-609685" container
	I1213 19:46:46.750854  232162 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 19:46:46.767466  232162 cli_runner.go:164] Run: docker volume create cert-expiration-609685 --label name.minikube.sigs.k8s.io=cert-expiration-609685 --label created_by.minikube.sigs.k8s.io=true
	I1213 19:46:46.784545  232162 oci.go:103] Successfully created a docker volume cert-expiration-609685
	I1213 19:46:46.784638  232162 cli_runner.go:164] Run: docker run --rm --name cert-expiration-609685-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-609685 --entrypoint /usr/bin/test -v cert-expiration-609685:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 19:46:47.345980  232162 oci.go:107] Successfully prepared a docker volume cert-expiration-609685
	I1213 19:46:47.346045  232162 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 19:46:47.346052  232162 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 19:46:47.346126  232162 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22122-2686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v cert-expiration-609685:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 19:46:51.380640  232162 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22122-2686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v cert-expiration-609685:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (4.034466354s)
	I1213 19:46:51.380660  232162 kic.go:203] duration metric: took 4.034604742s to extract preloaded images to volume ...
	W1213 19:46:51.380796  232162 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1213 19:46:51.380896  232162 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 19:46:51.454516  232162 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cert-expiration-609685 --name cert-expiration-609685 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-609685 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cert-expiration-609685 --network cert-expiration-609685 --ip 192.168.85.2 --volume cert-expiration-609685:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 19:46:51.772454  232162 cli_runner.go:164] Run: docker container inspect cert-expiration-609685 --format={{.State.Running}}
	I1213 19:46:51.797384  232162 cli_runner.go:164] Run: docker container inspect cert-expiration-609685 --format={{.State.Status}}
	I1213 19:46:51.822288  232162 cli_runner.go:164] Run: docker exec cert-expiration-609685 stat /var/lib/dpkg/alternatives/iptables
	I1213 19:46:51.871468  232162 oci.go:144] the created container "cert-expiration-609685" has a running status.
	I1213 19:46:51.871487  232162 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22122-2686/.minikube/machines/cert-expiration-609685/id_rsa...
	I1213 19:46:52.089896  232162 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22122-2686/.minikube/machines/cert-expiration-609685/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 19:46:52.115500  232162 cli_runner.go:164] Run: docker container inspect cert-expiration-609685 --format={{.State.Status}}
	I1213 19:46:52.136551  232162 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 19:46:52.136563  232162 kic_runner.go:114] Args: [docker exec --privileged cert-expiration-609685 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 19:46:52.208302  232162 cli_runner.go:164] Run: docker container inspect cert-expiration-609685 --format={{.State.Status}}
	I1213 19:46:52.244695  232162 machine.go:94] provisionDockerMachine start ...
	I1213 19:46:52.245288  232162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-609685
	I1213 19:46:52.272197  232162 main.go:143] libmachine: Using SSH client type: native
	I1213 19:46:52.272810  232162 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33038 <nil> <nil>}
	I1213 19:46:52.272817  232162 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 19:46:52.273523  232162 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1213 19:46:55.436715  232162 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-expiration-609685
	
	I1213 19:46:55.436731  232162 ubuntu.go:182] provisioning hostname "cert-expiration-609685"
	I1213 19:46:55.436793  232162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-609685
	I1213 19:46:55.453686  232162 main.go:143] libmachine: Using SSH client type: native
	I1213 19:46:55.454011  232162 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33038 <nil> <nil>}
	I1213 19:46:55.454021  232162 main.go:143] libmachine: About to run SSH command:
	sudo hostname cert-expiration-609685 && echo "cert-expiration-609685" | sudo tee /etc/hostname
	I1213 19:46:55.614915  232162 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-expiration-609685
	
	I1213 19:46:55.614989  232162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-609685
	I1213 19:46:55.631868  232162 main.go:143] libmachine: Using SSH client type: native
	I1213 19:46:55.632185  232162 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33038 <nil> <nil>}
	I1213 19:46:55.632199  232162 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-609685' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-609685/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-609685' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 19:46:55.781001  232162 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 19:46:55.781035  232162 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-2686/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-2686/.minikube}
	I1213 19:46:55.781059  232162 ubuntu.go:190] setting up certificates
	I1213 19:46:55.781068  232162 provision.go:84] configureAuth start
	I1213 19:46:55.781125  232162 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-609685
	I1213 19:46:55.797649  232162 provision.go:143] copyHostCerts
	I1213 19:46:55.797709  232162 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem, removing ...
	I1213 19:46:55.797719  232162 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem
	I1213 19:46:55.797798  232162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem (1082 bytes)
	I1213 19:46:55.797892  232162 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem, removing ...
	I1213 19:46:55.797896  232162 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem
	I1213 19:46:55.797920  232162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem (1123 bytes)
	I1213 19:46:55.797967  232162 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem, removing ...
	I1213 19:46:55.797970  232162 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem
	I1213 19:46:55.797993  232162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem (1675 bytes)
	I1213 19:46:55.798035  232162 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-609685 san=[127.0.0.1 192.168.85.2 cert-expiration-609685 localhost minikube]
	I1213 19:46:55.997475  232162 provision.go:177] copyRemoteCerts
	I1213 19:46:55.997535  232162 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 19:46:55.997584  232162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-609685
	I1213 19:46:56.016381  232162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33038 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/cert-expiration-609685/id_rsa Username:docker}
	I1213 19:46:56.121824  232162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 19:46:56.146440  232162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1213 19:46:56.164320  232162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
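configureAuth ends with a docker-machine server certificate whose subject alternative names are listed in the log (127.0.0.1, 192.168.85.2, cert-expiration-609685, localhost, minikube), after which ca.pem, server.pem and server-key.pem are copied to /etc/docker on the node. A minimal Go sketch of a certificate carrying the same SAN set, self-signed here for brevity where the real flow signs with the minikube CA key:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Generate a key and a certificate with the SAN set reported above.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.cert-expiration-609685"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"cert-expiration-609685", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}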
	I1213 19:46:56.182740  232162 provision.go:87] duration metric: took 401.659663ms to configureAuth
	I1213 19:46:56.182757  232162 ubuntu.go:206] setting minikube options for container-runtime
	I1213 19:46:56.182943  232162 config.go:182] Loaded profile config "cert-expiration-609685": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 19:46:56.183054  232162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-609685
	I1213 19:46:56.201065  232162 main.go:143] libmachine: Using SSH client type: native
	I1213 19:46:56.201418  232162 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33038 <nil> <nil>}
	I1213 19:46:56.201429  232162 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 19:46:56.502405  232162 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 19:46:56.502436  232162 machine.go:97] duration metric: took 4.257684015s to provisionDockerMachine
	I1213 19:46:56.502445  232162 client.go:176] duration metric: took 9.867988208s to LocalClient.Create
	I1213 19:46:56.502463  232162 start.go:167] duration metric: took 9.868044192s to libmachine.API.Create "cert-expiration-609685"
	I1213 19:46:56.502469  232162 start.go:293] postStartSetup for "cert-expiration-609685" (driver="docker")
	I1213 19:46:56.502479  232162 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 19:46:56.502553  232162 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 19:46:56.502593  232162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-609685
	I1213 19:46:56.519771  232162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33038 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/cert-expiration-609685/id_rsa Username:docker}
	I1213 19:46:56.624726  232162 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 19:46:56.627875  232162 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 19:46:56.627892  232162 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 19:46:56.627902  232162 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-2686/.minikube/addons for local assets ...
	I1213 19:46:56.627955  232162 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-2686/.minikube/files for local assets ...
	I1213 19:46:56.628050  232162 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem -> 46372.pem in /etc/ssl/certs
	I1213 19:46:56.628144  232162 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 19:46:56.635415  232162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem --> /etc/ssl/certs/46372.pem (1708 bytes)
	I1213 19:46:56.652520  232162 start.go:296] duration metric: took 150.037489ms for postStartSetup
	I1213 19:46:56.652870  232162 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-609685
	I1213 19:46:56.670011  232162 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/cert-expiration-609685/config.json ...
	I1213 19:46:56.670302  232162 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 19:46:56.670339  232162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-609685
	I1213 19:46:56.686774  232162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33038 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/cert-expiration-609685/id_rsa Username:docker}
	I1213 19:46:56.790145  232162 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 19:46:56.794653  232162 start.go:128] duration metric: took 10.163892549s to createHost
	I1213 19:46:56.794667  232162 start.go:83] releasing machines lock for "cert-expiration-609685", held for 10.164005878s
	I1213 19:46:56.794735  232162 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-609685
	I1213 19:46:56.811709  232162 ssh_runner.go:195] Run: cat /version.json
	I1213 19:46:56.811751  232162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-609685
	I1213 19:46:56.812585  232162 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 19:46:56.812639  232162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-609685
	I1213 19:46:56.833789  232162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33038 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/cert-expiration-609685/id_rsa Username:docker}
	I1213 19:46:56.840423  232162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33038 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/cert-expiration-609685/id_rsa Username:docker}
	I1213 19:46:57.040559  232162 ssh_runner.go:195] Run: systemctl --version
	I1213 19:46:57.046971  232162 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 19:46:57.082668  232162 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 19:46:57.086776  232162 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 19:46:57.086835  232162 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 19:46:57.119706  232162 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1213 19:46:57.119721  232162 start.go:496] detecting cgroup driver to use...
	I1213 19:46:57.119760  232162 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 19:46:57.119816  232162 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 19:46:57.138608  232162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 19:46:57.151397  232162 docker.go:218] disabling cri-docker service (if available) ...
	I1213 19:46:57.151452  232162 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 19:46:57.169375  232162 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 19:46:57.187920  232162 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 19:46:57.309350  232162 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 19:46:57.444231  232162 docker.go:234] disabling docker service ...
	I1213 19:46:57.444287  232162 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 19:46:57.465410  232162 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 19:46:57.478664  232162 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 19:46:57.600033  232162 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 19:46:57.728829  232162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 19:46:57.741766  232162 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 19:46:57.755323  232162 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 19:46:57.755379  232162 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:46:57.764156  232162 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 19:46:57.764214  232162 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:46:57.773173  232162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:46:57.781877  232162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:46:57.790262  232162 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 19:46:57.798140  232162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:46:57.806767  232162 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:46:57.819920  232162 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:46:57.828551  232162 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 19:46:57.836115  232162 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 19:46:57.843536  232162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 19:46:57.955087  232162 ssh_runner.go:195] Run: sudo systemctl restart crio
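The block above rewrites /etc/crio/crio.conf.d/02-crio.conf with sed before restarting CRI-O: pin the pause image to registry.k8s.io/pause:3.10.1, switch cgroup_manager to cgroupfs, and open unprivileged ports via default_sysctls. A small Go sketch (hypothetical helper, not minikube code) of the two central rewrites:

package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf mirrors the sed edits logged above: pin the pause image and
// force the requested cgroup manager in 02-crio.conf.
func rewriteCrioConf(conf, pauseImage, cgroupMgr string) string {
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	mgr := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = pause.ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
	conf = mgr.ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupMgr))
	return conf
}

func main() {
	in := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
	fmt.Print(rewriteCrioConf(in, "registry.k8s.io/pause:3.10.1", "cgroupfs"))
}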
	I1213 19:46:58.136167  232162 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 19:46:58.136230  232162 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 19:46:58.139920  232162 start.go:564] Will wait 60s for crictl version
	I1213 19:46:58.139975  232162 ssh_runner.go:195] Run: which crictl
	I1213 19:46:58.143417  232162 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 19:46:58.166756  232162 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 19:46:58.166847  232162 ssh_runner.go:195] Run: crio --version
	I1213 19:46:58.194836  232162 ssh_runner.go:195] Run: crio --version
	I1213 19:46:58.228213  232162 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1213 19:46:58.231091  232162 cli_runner.go:164] Run: docker network inspect cert-expiration-609685 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 19:46:58.246910  232162 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1213 19:46:58.250608  232162 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 19:46:58.260223  232162 kubeadm.go:884] updating cluster {Name:cert-expiration-609685 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:cert-expiration-609685 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 19:46:58.260336  232162 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 19:46:58.260390  232162 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 19:46:58.293912  232162 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 19:46:58.293923  232162 crio.go:433] Images already preloaded, skipping extraction
	I1213 19:46:58.293977  232162 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 19:46:58.320598  232162 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 19:46:58.320609  232162 cache_images.go:86] Images are preloaded, skipping loading
	I1213 19:46:58.320615  232162 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1213 19:46:58.320705  232162 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=cert-expiration-609685 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:cert-expiration-609685 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 19:46:58.320784  232162 ssh_runner.go:195] Run: crio config
	I1213 19:46:58.395183  232162 cni.go:84] Creating CNI manager for ""
	I1213 19:46:58.395194  232162 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 19:46:58.395210  232162 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 19:46:58.395231  232162 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-609685 NodeName:cert-expiration-609685 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 19:46:58.395359  232162 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-609685"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
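In the generated config the pod subnet (10.244.0.0/16) and the service subnet (10.96.0.0/12) must stay disjoint; a quick standard-library check of that property (illustrative only, not part of minikube):

package main

import (
	"fmt"
	"net"
)

// overlaps reports whether two CIDR blocks intersect. Two CIDRs are either
// disjoint or nested, so it suffices to test each network address.
func overlaps(a, b *net.IPNet) bool {
	return a.Contains(b.IP) || b.Contains(a.IP)
}

func main() {
	_, pods, _ := net.ParseCIDR("10.244.0.0/16") // networking.podSubnet
	_, svcs, _ := net.ParseCIDR("10.96.0.0/12")  // networking.serviceSubnet
	fmt.Println("pod/service CIDRs overlap:", overlaps(pods, svcs)) // false
}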
	
	I1213 19:46:58.395428  232162 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 19:46:58.405083  232162 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 19:46:58.405161  232162 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 19:46:58.413530  232162 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1213 19:46:58.426326  232162 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 19:46:58.439699  232162 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1213 19:46:58.452645  232162 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1213 19:46:58.456257  232162 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 19:46:58.466376  232162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 19:46:58.586280  232162 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 19:46:58.607226  232162 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/cert-expiration-609685 for IP: 192.168.85.2
	I1213 19:46:58.607236  232162 certs.go:195] generating shared ca certs ...
	I1213 19:46:58.607256  232162 certs.go:227] acquiring lock for ca certs: {Name:mkf9b87b9a1a82bdfcae8a54365751154f6d858f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:46:58.607396  232162 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key
	I1213 19:46:58.607444  232162 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key
	I1213 19:46:58.607457  232162 certs.go:257] generating profile certs ...
	I1213 19:46:58.607511  232162 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/cert-expiration-609685/client.key
	I1213 19:46:58.607521  232162 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/cert-expiration-609685/client.crt with IP's: []
	I1213 19:46:58.837288  232162 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/cert-expiration-609685/client.crt ...
	I1213 19:46:58.837304  232162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/cert-expiration-609685/client.crt: {Name:mk9eb1bf83e8406fb15b4e730204ea2323fcb49b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:46:58.837510  232162 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/cert-expiration-609685/client.key ...
	I1213 19:46:58.837520  232162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/cert-expiration-609685/client.key: {Name:mk363aa9fff85e147d6ba6260487578c1e356cc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:46:58.837627  232162 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/cert-expiration-609685/apiserver.key.59ecd64d
	I1213 19:46:58.837640  232162 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/cert-expiration-609685/apiserver.crt.59ecd64d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1213 19:46:59.487418  232162 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/cert-expiration-609685/apiserver.crt.59ecd64d ...
	I1213 19:46:59.487433  232162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/cert-expiration-609685/apiserver.crt.59ecd64d: {Name:mk3dbb35c9a819a473e9ad44f7b70e5cfede1bc2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:46:59.487635  232162 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/cert-expiration-609685/apiserver.key.59ecd64d ...
	I1213 19:46:59.487643  232162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/cert-expiration-609685/apiserver.key.59ecd64d: {Name:mk310a024e64e500f8912e19572ac6369298549b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:46:59.487729  232162 certs.go:382] copying /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/cert-expiration-609685/apiserver.crt.59ecd64d -> /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/cert-expiration-609685/apiserver.crt
	I1213 19:46:59.487806  232162 certs.go:386] copying /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/cert-expiration-609685/apiserver.key.59ecd64d -> /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/cert-expiration-609685/apiserver.key
	I1213 19:46:59.487859  232162 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/cert-expiration-609685/proxy-client.key
	I1213 19:46:59.487873  232162 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/cert-expiration-609685/proxy-client.crt with IP's: []
	I1213 19:46:59.823641  232162 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/cert-expiration-609685/proxy-client.crt ...
	I1213 19:46:59.823656  232162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/cert-expiration-609685/proxy-client.crt: {Name:mke8d355b374fb38ad25083e6ed9d7cb52353c71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:46:59.823874  232162 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/cert-expiration-609685/proxy-client.key ...
	I1213 19:46:59.823882  232162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/cert-expiration-609685/proxy-client.key: {Name:mk2390c13f88de3abc139fe1fe81197432494692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:46:59.824067  232162 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637.pem (1338 bytes)
	W1213 19:46:59.824104  232162 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637_empty.pem, impossibly tiny 0 bytes
	I1213 19:46:59.824111  232162 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 19:46:59.824136  232162 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem (1082 bytes)
	I1213 19:46:59.824158  232162 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem (1123 bytes)
	I1213 19:46:59.824180  232162 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem (1675 bytes)
	I1213 19:46:59.824229  232162 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem (1708 bytes)
	I1213 19:46:59.824885  232162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 19:46:59.852084  232162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 19:46:59.869284  232162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 19:46:59.886734  232162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 19:46:59.904125  232162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/cert-expiration-609685/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1213 19:46:59.921728  232162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/cert-expiration-609685/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 19:46:59.939560  232162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/cert-expiration-609685/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 19:46:59.956907  232162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/cert-expiration-609685/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 19:46:59.996269  232162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637.pem --> /usr/share/ca-certificates/4637.pem (1338 bytes)
	I1213 19:47:00.142697  232162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem --> /usr/share/ca-certificates/46372.pem (1708 bytes)
	I1213 19:47:00.269604  232162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 19:47:00.323054  232162 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 19:47:00.345070  232162 ssh_runner.go:195] Run: openssl version
	I1213 19:47:00.356757  232162 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:47:00.375249  232162 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 19:47:00.386584  232162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:47:00.391105  232162 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:47:00.391173  232162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:47:00.436092  232162 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 19:47:00.444853  232162 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1213 19:47:00.453833  232162 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4637.pem
	I1213 19:47:00.461972  232162 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4637.pem /etc/ssl/certs/4637.pem
	I1213 19:47:00.470117  232162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4637.pem
	I1213 19:47:00.474408  232162 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 18:27 /usr/share/ca-certificates/4637.pem
	I1213 19:47:00.474470  232162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4637.pem
	I1213 19:47:00.518200  232162 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 19:47:00.526055  232162 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4637.pem /etc/ssl/certs/51391683.0
	I1213 19:47:00.533431  232162 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/46372.pem
	I1213 19:47:00.540983  232162 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/46372.pem /etc/ssl/certs/46372.pem
	I1213 19:47:00.548879  232162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/46372.pem
	I1213 19:47:00.552514  232162 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 18:27 /usr/share/ca-certificates/46372.pem
	I1213 19:47:00.552575  232162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/46372.pem
	I1213 19:47:00.594977  232162 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 19:47:00.603368  232162 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/46372.pem /etc/ssl/certs/3ec20f2e.0
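The openssl/ln sequence above wires each CA into the system trust store the same way c_rehash does: compute the certificate's subject hash and symlink /etc/ssl/certs/<hash>.0 at the PEM. A Go sketch of that step (assumes the openssl binary on PATH, as the log itself does; root is needed to write under /etc/ssl/certs):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash asks openssl for the certificate's subject hash and
// points <certsDir>/<hash>.0 at the PEM. Sketch only; error handling trimmed.
func linkBySubjectHash(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	os.Remove(link) // emulate ln -fs: replace any existing link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}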
	I1213 19:47:00.611384  232162 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 19:47:00.616004  232162 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 19:47:00.616046  232162 kubeadm.go:401] StartCluster: {Name:cert-expiration-609685 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:cert-expiration-609685 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 19:47:00.616108  232162 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 19:47:00.616173  232162 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 19:47:00.646574  232162 cri.go:89] found id: ""
	I1213 19:47:00.646642  232162 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 19:47:00.654623  232162 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 19:47:00.662380  232162 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 19:47:00.662439  232162 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 19:47:00.670577  232162 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 19:47:00.670589  232162 kubeadm.go:158] found existing configuration files:
	
	I1213 19:47:00.670638  232162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 19:47:00.678276  232162 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 19:47:00.678329  232162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 19:47:00.685597  232162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 19:47:00.692843  232162 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 19:47:00.692897  232162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 19:47:00.699982  232162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 19:47:00.707721  232162 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 19:47:00.707778  232162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 19:47:00.714974  232162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 19:47:00.722371  232162 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 19:47:00.722423  232162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 19:47:00.729412  232162 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 19:47:00.769903  232162 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1213 19:47:00.769954  232162 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 19:47:00.793055  232162 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 19:47:00.793118  232162 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 19:47:00.793173  232162 kubeadm.go:319] OS: Linux
	I1213 19:47:00.793230  232162 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 19:47:00.793294  232162 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 19:47:00.793341  232162 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 19:47:00.793387  232162 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 19:47:00.793458  232162 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 19:47:00.793508  232162 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 19:47:00.793576  232162 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 19:47:00.793639  232162 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 19:47:00.793693  232162 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 19:47:00.863707  232162 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 19:47:00.863827  232162 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 19:47:00.863956  232162 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 19:47:00.871836  232162 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 19:47:00.878210  232162 out.go:252]   - Generating certificates and keys ...
	I1213 19:47:00.878295  232162 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 19:47:00.878358  232162 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 19:47:01.357382  232162 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 19:47:02.926162  232162 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 19:47:03.354788  232162 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 19:47:03.838583  232162 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 19:47:04.026716  232162 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 19:47:04.026868  232162 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [cert-expiration-609685 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1213 19:47:04.916935  232162 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 19:47:04.917194  232162 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [cert-expiration-609685 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1213 19:47:05.819204  232162 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 19:47:07.857000  232162 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 19:47:08.328300  232162 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 19:47:08.328585  232162 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 19:47:08.504355  232162 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 19:47:08.925714  232162 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 19:47:09.192842  232162 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 19:47:09.350545  232162 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 19:47:10.272479  232162 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 19:47:10.273054  232162 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 19:47:10.275757  232162 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 19:47:10.279282  232162 out.go:252]   - Booting up control plane ...
	I1213 19:47:10.279391  232162 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 19:47:10.279471  232162 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 19:47:10.279536  232162 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 19:47:10.295981  232162 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 19:47:10.296090  232162 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 19:47:10.305483  232162 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 19:47:10.305584  232162 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 19:47:10.305623  232162 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 19:47:10.450864  232162 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 19:47:10.450978  232162 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 19:47:12.453421  232162 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.000975201s
	I1213 19:47:12.454091  232162 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1213 19:47:12.454328  232162 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1213 19:47:12.454615  232162 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1213 19:47:12.454697  232162 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1213 19:47:16.026025  232162 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.571384788s
	I1213 19:47:19.253806  232162 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 6.799549584s
	I1213 19:47:19.456341  232162 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.00189531s
	I1213 19:47:19.488516  232162 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 19:47:19.504573  232162 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 19:47:19.518688  232162 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 19:47:19.518903  232162 kubeadm.go:319] [mark-control-plane] Marking the node cert-expiration-609685 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1213 19:47:19.531577  232162 kubeadm.go:319] [bootstrap-token] Using token: kqbgzd.5z7xujtmk486c6p8
	I1213 19:47:19.534592  232162 out.go:252]   - Configuring RBAC rules ...
	I1213 19:47:19.534709  232162 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 19:47:19.538775  232162 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1213 19:47:19.546847  232162 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 19:47:19.553875  232162 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 19:47:19.558509  232162 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 19:47:19.562892  232162 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 19:47:19.863649  232162 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1213 19:47:20.300214  232162 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1213 19:47:20.868362  232162 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1213 19:47:20.869968  232162 kubeadm.go:319] 
	I1213 19:47:20.870033  232162 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1213 19:47:20.870037  232162 kubeadm.go:319] 
	I1213 19:47:20.870113  232162 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1213 19:47:20.870116  232162 kubeadm.go:319] 
	I1213 19:47:20.870140  232162 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1213 19:47:20.870197  232162 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 19:47:20.870247  232162 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 19:47:20.870249  232162 kubeadm.go:319] 
	I1213 19:47:20.870302  232162 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1213 19:47:20.870304  232162 kubeadm.go:319] 
	I1213 19:47:20.870351  232162 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1213 19:47:20.870353  232162 kubeadm.go:319] 
	I1213 19:47:20.870404  232162 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1213 19:47:20.870477  232162 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 19:47:20.870544  232162 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 19:47:20.870547  232162 kubeadm.go:319] 
	I1213 19:47:20.870637  232162 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1213 19:47:20.870713  232162 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1213 19:47:20.870722  232162 kubeadm.go:319] 
	I1213 19:47:20.870805  232162 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token kqbgzd.5z7xujtmk486c6p8 \
	I1213 19:47:20.870907  232162 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:5c855727c547190fbfc8dabe20c5acea2e54aecf6fee3a83d21da995a7e3060d \
	I1213 19:47:20.870926  232162 kubeadm.go:319] 	--control-plane 
	I1213 19:47:20.870929  232162 kubeadm.go:319] 
	I1213 19:47:20.871012  232162 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1213 19:47:20.871015  232162 kubeadm.go:319] 
	I1213 19:47:20.871095  232162 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token kqbgzd.5z7xujtmk486c6p8 \
	I1213 19:47:20.871196  232162 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:5c855727c547190fbfc8dabe20c5acea2e54aecf6fee3a83d21da995a7e3060d 
	I1213 19:47:20.875842  232162 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1213 19:47:20.876061  232162 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 19:47:20.876165  232162 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
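The --discovery-token-ca-cert-hash printed in the join command above is a SHA-256 pin over the DER-encoded Subject Public Key Info of the cluster CA certificate (RFC 7469 style). A sketch of recomputing it from ca.crt (the file path is assumed from the cert copy steps earlier in this log):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

// caCertHash computes the value kubeadm prints as --discovery-token-ca-cert-hash:
// sha256 over the DER-encoded Subject Public Key Info of the cluster CA.
func caCertHash(caPEM []byte) (string, error) {
	block, _ := pem.Decode(caPEM)
	if block == nil {
		return "", fmt.Errorf("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(spki)
	return "sha256:" + hex.EncodeToString(sum[:]), nil
}

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	h, _ := caCertHash(data)
	fmt.Println(h)
}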
	I1213 19:47:20.876178  232162 cni.go:84] Creating CNI manager for ""
	I1213 19:47:20.876184  232162 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 19:47:20.879413  232162 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1213 19:47:20.888404  232162 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1213 19:47:20.900071  232162 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1213 19:47:20.900083  232162 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1213 19:47:20.926329  232162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1213 19:47:21.215095  232162 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 19:47:21.215260  232162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 19:47:21.215337  232162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes cert-expiration-609685 minikube.k8s.io/updated_at=2025_12_13T19_47_21_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=142a8bd7cb3f031b5f72a3965bb211dc77d9e1a7 minikube.k8s.io/name=cert-expiration-609685 minikube.k8s.io/primary=true
	I1213 19:47:21.425135  232162 ops.go:34] apiserver oom_adj: -16
	I1213 19:47:21.425155  232162 kubeadm.go:1114] duration metric: took 209.964414ms to wait for elevateKubeSystemPrivileges
	I1213 19:47:21.425167  232162 kubeadm.go:403] duration metric: took 20.809124286s to StartCluster
	I1213 19:47:21.425182  232162 settings.go:142] acquiring lock: {Name:mkabef07beee93a0619ef6b8f854900ab9ed0899 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:47:21.425245  232162 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22122-2686/kubeconfig
	I1213 19:47:21.426127  232162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/kubeconfig: {Name:mkd364151ba0e08b56cf2ae826abbcc274faaabf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:47:21.426341  232162 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 19:47:21.426422  232162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 19:47:21.426662  232162 config.go:182] Loaded profile config "cert-expiration-609685": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 19:47:21.426698  232162 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 19:47:21.426757  232162 addons.go:70] Setting storage-provisioner=true in profile "cert-expiration-609685"
	I1213 19:47:21.426775  232162 addons.go:239] Setting addon storage-provisioner=true in "cert-expiration-609685"
	I1213 19:47:21.426796  232162 host.go:66] Checking if "cert-expiration-609685" exists ...
	I1213 19:47:21.427304  232162 cli_runner.go:164] Run: docker container inspect cert-expiration-609685 --format={{.State.Status}}
	I1213 19:47:21.427760  232162 addons.go:70] Setting default-storageclass=true in profile "cert-expiration-609685"
	I1213 19:47:21.427773  232162 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "cert-expiration-609685"
	I1213 19:47:21.428052  232162 cli_runner.go:164] Run: docker container inspect cert-expiration-609685 --format={{.State.Status}}
	I1213 19:47:21.430824  232162 out.go:179] * Verifying Kubernetes components...
	I1213 19:47:21.438121  232162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 19:47:21.475339  232162 addons.go:239] Setting addon default-storageclass=true in "cert-expiration-609685"
	I1213 19:47:21.475365  232162 host.go:66] Checking if "cert-expiration-609685" exists ...
	I1213 19:47:21.475787  232162 cli_runner.go:164] Run: docker container inspect cert-expiration-609685 --format={{.State.Status}}
	I1213 19:47:21.502093  232162 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 19:47:21.504892  232162 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 19:47:21.504904  232162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 19:47:21.504969  232162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-609685
	I1213 19:47:21.512619  232162 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 19:47:21.512631  232162 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 19:47:21.512696  232162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-609685
	I1213 19:47:21.543375  232162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33038 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/cert-expiration-609685/id_rsa Username:docker}
	I1213 19:47:21.558692  232162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33038 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/cert-expiration-609685/id_rsa Username:docker}
	I1213 19:47:21.761255  232162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1213 19:47:21.761356  232162 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 19:47:21.763976  232162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 19:47:21.856928  232162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 19:47:22.199083  232162 api_server.go:52] waiting for apiserver process to appear ...
	I1213 19:47:22.199131  232162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:47:22.199216  232162 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1213 19:47:22.455013  232162 api_server.go:72] duration metric: took 1.028647391s to wait for apiserver process to appear ...
	I1213 19:47:22.455124  232162 api_server.go:88] waiting for apiserver healthz status ...
	I1213 19:47:22.455163  232162 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1213 19:47:22.458001  232162 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1213 19:47:22.461108  232162 addons.go:530] duration metric: took 1.034403819s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1213 19:47:22.468999  232162 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1213 19:47:22.473810  232162 api_server.go:141] control plane version: v1.34.2
	I1213 19:47:22.473828  232162 api_server.go:131] duration metric: took 18.698736ms to wait for apiserver health ...
	I1213 19:47:22.473836  232162 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 19:47:22.477049  232162 system_pods.go:59] 5 kube-system pods found
	I1213 19:47:22.477070  232162 system_pods.go:61] "etcd-cert-expiration-609685" [d5d58052-4ea7-499d-8b56-c1eeea917720] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 19:47:22.477078  232162 system_pods.go:61] "kube-apiserver-cert-expiration-609685" [5700e66e-c235-4051-9a9d-f7f94ee0bcea] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 19:47:22.477085  232162 system_pods.go:61] "kube-controller-manager-cert-expiration-609685" [940720b8-b3ce-4fda-8295-c70b09e06394] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 19:47:22.477090  232162 system_pods.go:61] "kube-scheduler-cert-expiration-609685" [ce9ec2ef-53c7-4eaa-85cc-8414f9de5138] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 19:47:22.477094  232162 system_pods.go:61] "storage-provisioner" [0bb250c3-6989-4d74-8c50-331a32250f7d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1213 19:47:22.477098  232162 system_pods.go:74] duration metric: took 3.258363ms to wait for pod list to return data ...
	I1213 19:47:22.477109  232162 kubeadm.go:587] duration metric: took 1.050747458s to wait for: map[apiserver:true system_pods:true]
	I1213 19:47:22.477120  232162 node_conditions.go:102] verifying NodePressure condition ...
	I1213 19:47:22.480141  232162 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1213 19:47:22.480161  232162 node_conditions.go:123] node cpu capacity is 2
	I1213 19:47:22.480172  232162 node_conditions.go:105] duration metric: took 3.048229ms to run NodePressure ...
	I1213 19:47:22.480183  232162 start.go:242] waiting for startup goroutines ...
	I1213 19:47:22.702753  232162 kapi.go:214] "coredns" deployment in "kube-system" namespace and "cert-expiration-609685" context rescaled to 1 replicas
	I1213 19:47:22.702774  232162 start.go:247] waiting for cluster config update ...
	I1213 19:47:22.702784  232162 start.go:256] writing updated cluster config ...
	I1213 19:47:22.703074  232162 ssh_runner.go:195] Run: rm -f paused
	I1213 19:47:22.778721  232162 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1213 19:47:22.781956  232162 out.go:179] * Done! kubectl is now configured to use "cert-expiration-609685" cluster and "default" namespace by default
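Note on the CoreDNS step above: the sed pipeline logged at 19:47:21.761255 rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host gateway (192.168.85.1 in this run). A minimal sketch of how the injected block could be verified, assuming kubectl is pointed at the cert-expiration-609685 context that minikube just configured; the expected stanza is taken verbatim from the sed expression above:

	# Print the Corefile that minikube rewrote; the injected block should read:
	#   hosts {
	#      192.168.85.1 host.minikube.internal
	#      fallthrough
	#   }
	kubectl --context cert-expiration-609685 -n kube-system \
	  get configmap coredns -o jsonpath='{.data.Corefile}'
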
	I1213 19:49:43.093650  195912 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001935047s
	I1213 19:49:43.093686  195912 kubeadm.go:319] 
	I1213 19:49:43.093789  195912 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 19:49:43.093854  195912 kubeadm.go:319] 	- The kubelet is not running
	I1213 19:49:43.093959  195912 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 19:49:43.093966  195912 kubeadm.go:319] 
	I1213 19:49:43.094071  195912 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 19:49:43.094103  195912 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 19:49:43.094134  195912 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 19:49:43.094139  195912 kubeadm.go:319] 
	I1213 19:49:43.098110  195912 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 19:49:43.098543  195912 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 19:49:43.098655  195912 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 19:49:43.098908  195912 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 19:49:43.098917  195912 kubeadm.go:319] 
	I1213 19:49:43.098998  195912 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 19:49:43.099066  195912 kubeadm.go:403] duration metric: took 12m10.617541701s to StartCluster
	I1213 19:49:43.099119  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:49:43.099182  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:49:43.124755  195912 cri.go:89] found id: ""
	I1213 19:49:43.124777  195912 logs.go:282] 0 containers: []
	W1213 19:49:43.124786  195912 logs.go:284] No container was found matching "kube-apiserver"
	I1213 19:49:43.124792  195912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:49:43.124864  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:49:43.150651  195912 cri.go:89] found id: ""
	I1213 19:49:43.150677  195912 logs.go:282] 0 containers: []
	W1213 19:49:43.150686  195912 logs.go:284] No container was found matching "etcd"
	I1213 19:49:43.150692  195912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:49:43.150750  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:49:43.179821  195912 cri.go:89] found id: ""
	I1213 19:49:43.179844  195912 logs.go:282] 0 containers: []
	W1213 19:49:43.179853  195912 logs.go:284] No container was found matching "coredns"
	I1213 19:49:43.179859  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:49:43.179917  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:49:43.206244  195912 cri.go:89] found id: ""
	I1213 19:49:43.206266  195912 logs.go:282] 0 containers: []
	W1213 19:49:43.206274  195912 logs.go:284] No container was found matching "kube-scheduler"
	I1213 19:49:43.206281  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:49:43.206337  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:49:43.233517  195912 cri.go:89] found id: ""
	I1213 19:49:43.233542  195912 logs.go:282] 0 containers: []
	W1213 19:49:43.233551  195912 logs.go:284] No container was found matching "kube-proxy"
	I1213 19:49:43.233557  195912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:49:43.233619  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:49:43.260154  195912 cri.go:89] found id: ""
	I1213 19:49:43.260182  195912 logs.go:282] 0 containers: []
	W1213 19:49:43.260190  195912 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 19:49:43.260196  195912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:49:43.260258  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:49:43.288804  195912 cri.go:89] found id: ""
	I1213 19:49:43.288833  195912 logs.go:282] 0 containers: []
	W1213 19:49:43.288842  195912 logs.go:284] No container was found matching "kindnet"
	I1213 19:49:43.288847  195912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 19:49:43.288945  195912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 19:49:43.316457  195912 cri.go:89] found id: ""
	I1213 19:49:43.316486  195912 logs.go:282] 0 containers: []
	W1213 19:49:43.316495  195912 logs.go:284] No container was found matching "storage-provisioner"
	I1213 19:49:43.316505  195912 logs.go:123] Gathering logs for kubelet ...
	I1213 19:49:43.316516  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 19:49:43.413739  195912 logs.go:123] Gathering logs for dmesg ...
	I1213 19:49:43.413790  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:49:43.430594  195912 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:49:43.430634  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 19:49:43.504376  195912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 19:49:43.504397  195912 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:49:43.504409  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:49:43.538085  195912 logs.go:123] Gathering logs for container status ...
	I1213 19:49:43.538123  195912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 19:49:43.571637  195912 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001935047s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 19:49:43.571694  195912 out.go:285] * 
	W1213 19:49:43.571750  195912 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001935047s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 19:49:43.571782  195912 out.go:285] * 
	W1213 19:49:43.574330  195912 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 19:49:43.579795  195912 out.go:203] 
	W1213 19:49:43.582736  195912 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001935047s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 19:49:43.582816  195912 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 19:49:43.582845  195912 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 19:49:43.585949  195912 out.go:203] 
	
	
	==> CRI-O <==
	Dec 13 19:37:26 kubernetes-upgrade-203932 crio[614]: time="2025-12-13T19:37:26.106402483Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 13 19:37:26 kubernetes-upgrade-203932 crio[614]: time="2025-12-13T19:37:26.106441318Z" level=info msg="Starting seccomp notifier watcher"
	Dec 13 19:37:26 kubernetes-upgrade-203932 crio[614]: time="2025-12-13T19:37:26.106477248Z" level=info msg="Create NRI interface"
	Dec 13 19:37:26 kubernetes-upgrade-203932 crio[614]: time="2025-12-13T19:37:26.106570525Z" level=info msg="built-in NRI default validator is disabled"
	Dec 13 19:37:26 kubernetes-upgrade-203932 crio[614]: time="2025-12-13T19:37:26.106578508Z" level=info msg="runtime interface created"
	Dec 13 19:37:26 kubernetes-upgrade-203932 crio[614]: time="2025-12-13T19:37:26.106589101Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 13 19:37:26 kubernetes-upgrade-203932 crio[614]: time="2025-12-13T19:37:26.10659464Z" level=info msg="runtime interface starting up..."
	Dec 13 19:37:26 kubernetes-upgrade-203932 crio[614]: time="2025-12-13T19:37:26.106600301Z" level=info msg="starting plugins..."
	Dec 13 19:37:26 kubernetes-upgrade-203932 crio[614]: time="2025-12-13T19:37:26.106612215Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 19:37:26 kubernetes-upgrade-203932 crio[614]: time="2025-12-13T19:37:26.106665582Z" level=info msg="No systemd watchdog enabled"
	Dec 13 19:37:26 kubernetes-upgrade-203932 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 13 19:41:37 kubernetes-upgrade-203932 crio[614]: time="2025-12-13T19:41:37.732040423Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=710c412e-822c-494e-8c66-2f1b1e7486eb name=/runtime.v1.ImageService/ImageStatus
	Dec 13 19:41:37 kubernetes-upgrade-203932 crio[614]: time="2025-12-13T19:41:37.732962331Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=e28400bd-d45b-4c03-9f00-487bedaa77cb name=/runtime.v1.ImageService/ImageStatus
	Dec 13 19:41:37 kubernetes-upgrade-203932 crio[614]: time="2025-12-13T19:41:37.733701896Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=03f856e0-d594-47e8-a64d-7ed2d2c03785 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 19:41:37 kubernetes-upgrade-203932 crio[614]: time="2025-12-13T19:41:37.734700186Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=c8deff2e-901a-4651-8cd2-4537d3015ddd name=/runtime.v1.ImageService/ImageStatus
	Dec 13 19:41:37 kubernetes-upgrade-203932 crio[614]: time="2025-12-13T19:41:37.741852541Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=90c1a0a6-47ca-42f8-af03-0ed881e3d73e name=/runtime.v1.ImageService/ImageStatus
	Dec 13 19:41:37 kubernetes-upgrade-203932 crio[614]: time="2025-12-13T19:41:37.742490354Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=304850b9-9309-4e58-98fb-d492e4499903 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 19:41:37 kubernetes-upgrade-203932 crio[614]: time="2025-12-13T19:41:37.743018373Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=996d834d-e7fa-44c7-9f0a-638eaa74db4f name=/runtime.v1.ImageService/ImageStatus
	Dec 13 19:45:40 kubernetes-upgrade-203932 crio[614]: time="2025-12-13T19:45:40.860768084Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=732328d6-37de-46b5-bf9f-8e91c9a9ca27 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 19:45:40 kubernetes-upgrade-203932 crio[614]: time="2025-12-13T19:45:40.864060966Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=6e2c27da-1c73-4729-9bdf-5d912fadcf09 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 19:45:40 kubernetes-upgrade-203932 crio[614]: time="2025-12-13T19:45:40.864742643Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=ada6ef85-fc35-478b-983c-293ba394d8b4 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 19:45:40 kubernetes-upgrade-203932 crio[614]: time="2025-12-13T19:45:40.865295097Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=87adf544-1d51-4a6d-a41a-83b97026b5f9 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 19:45:40 kubernetes-upgrade-203932 crio[614]: time="2025-12-13T19:45:40.865785906Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=9456ee9f-f89d-454f-a148-74e2c8372208 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 19:45:40 kubernetes-upgrade-203932 crio[614]: time="2025-12-13T19:45:40.866265941Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=f5b4b6d9-ed93-4217-a463-eb04f330e38a name=/runtime.v1.ImageService/ImageStatus
	Dec 13 19:45:40 kubernetes-upgrade-203932 crio[614]: time="2025-12-13T19:45:40.866733972Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=5cc82230-baa2-4b45-a969-7a2cc7effc03 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 19:07] overlayfs: idmapped layers are currently not supported
	[  +4.088622] overlayfs: idmapped layers are currently not supported
	[Dec13 19:16] overlayfs: idmapped layers are currently not supported
	[Dec13 19:18] overlayfs: idmapped layers are currently not supported
	[Dec13 19:22] overlayfs: idmapped layers are currently not supported
	[Dec13 19:23] overlayfs: idmapped layers are currently not supported
	[Dec13 19:24] overlayfs: idmapped layers are currently not supported
	[Dec13 19:25] overlayfs: idmapped layers are currently not supported
	[Dec13 19:26] overlayfs: idmapped layers are currently not supported
	[Dec13 19:28] overlayfs: idmapped layers are currently not supported
	[ +16.353793] overlayfs: idmapped layers are currently not supported
	[ +17.019256] overlayfs: idmapped layers are currently not supported
	[Dec13 19:29] overlayfs: idmapped layers are currently not supported
	[Dec13 19:30] overlayfs: idmapped layers are currently not supported
	[ +42.207433] overlayfs: idmapped layers are currently not supported
	[Dec13 19:31] overlayfs: idmapped layers are currently not supported
	[Dec13 19:32] overlayfs: idmapped layers are currently not supported
	[Dec13 19:33] overlayfs: idmapped layers are currently not supported
	[Dec13 19:35] overlayfs: idmapped layers are currently not supported
	[Dec13 19:36] overlayfs: idmapped layers are currently not supported
	[Dec13 19:43] overlayfs: idmapped layers are currently not supported
	[Dec13 19:45] overlayfs: idmapped layers are currently not supported
	[Dec13 19:46] overlayfs: idmapped layers are currently not supported
	[Dec13 19:47] hrtimer: interrupt took 28121988 ns
	[ +12.126331] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 19:49:45 up  2:32,  0 user,  load average: 1.11, 1.91, 1.98
	Linux kubernetes-upgrade-203932 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 19:49:42 kubernetes-upgrade-203932 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 19:49:43 kubernetes-upgrade-203932 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 963.
	Dec 13 19:49:43 kubernetes-upgrade-203932 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 19:49:43 kubernetes-upgrade-203932 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 19:49:43 kubernetes-upgrade-203932 kubelet[12324]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 19:49:43 kubernetes-upgrade-203932 kubelet[12324]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 19:49:43 kubernetes-upgrade-203932 kubelet[12324]: E1213 19:49:43.392503   12324 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 19:49:43 kubernetes-upgrade-203932 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 19:49:43 kubernetes-upgrade-203932 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 19:49:44 kubernetes-upgrade-203932 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 964.
	Dec 13 19:49:44 kubernetes-upgrade-203932 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 19:49:44 kubernetes-upgrade-203932 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 19:49:44 kubernetes-upgrade-203932 kubelet[12353]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 19:49:44 kubernetes-upgrade-203932 kubelet[12353]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 19:49:44 kubernetes-upgrade-203932 kubelet[12353]: E1213 19:49:44.156494   12353 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 19:49:44 kubernetes-upgrade-203932 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 19:49:44 kubernetes-upgrade-203932 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 19:49:44 kubernetes-upgrade-203932 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 965.
	Dec 13 19:49:44 kubernetes-upgrade-203932 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 19:49:44 kubernetes-upgrade-203932 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 19:49:44 kubernetes-upgrade-203932 kubelet[12397]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 19:49:44 kubernetes-upgrade-203932 kubelet[12397]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 19:49:44 kubernetes-upgrade-203932 kubelet[12397]: E1213 19:49:44.899513   12397 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 19:49:44 kubernetes-upgrade-203932 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 19:49:44 kubernetes-upgrade-203932 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-203932 -n kubernetes-upgrade-203932
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-203932 -n kubernetes-upgrade-203932: exit status 2 (335.012774ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "kubernetes-upgrade-203932" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "kubernetes-upgrade-203932" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-203932
E1213 19:49:45.767281    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-203932: (2.422140877s)
--- FAIL: TestKubernetesUpgrade (796.37s)
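
Note on the failure above: the kubelet journal shows the v1.35.0-beta.0 kubelet refusing to start because the host is still on cgroup v1 ("kubelet is configured to not run on a host using cgroup v1"), which matches the kubeadm preflight warning about the 'FailCgroupV1' option. A minimal sketch of the two workarounds named in that output, assuming the profile, docker driver, and Kubernetes version from this run; the KubeletConfiguration fragment uses the lower-camel YAML form of the field named in the warning and is illustrative only, since how the patch reaches /var/lib/kubelet/config.yaml is outside this sketch:

	# Workaround 1, from minikube's own suggestion above: force the systemd cgroup driver.
	minikube start -p kubernetes-upgrade-203932 \
	  --driver=docker --container-runtime=crio \
	  --kubernetes-version=v1.35.0-beta.0 \
	  --extra-config=kubelet.cgroup-driver=systemd

	# Workaround 2, from the kubeadm preflight warning above: explicitly allow cgroup v1
	# for kubelet v1.35 or newer via the kubelet configuration.
	cat <<'EOF' > kubelet-cgroupv1-patch.yaml
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false
	EOF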

                                                
                                    
x
+
TestPause/serial/Pause (6.35s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-327125 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-327125 --alsologtostderr -v=5: exit status 80 (1.796990797s)

                                                
                                                
-- stdout --
	* Pausing node pause-327125 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 19:45:13.498086  224082 out.go:360] Setting OutFile to fd 1 ...
	I1213 19:45:13.498814  224082 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 19:45:13.498864  224082 out.go:374] Setting ErrFile to fd 2...
	I1213 19:45:13.498887  224082 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 19:45:13.499292  224082 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-2686/.minikube/bin
	I1213 19:45:13.499723  224082 out.go:368] Setting JSON to false
	I1213 19:45:13.500241  224082 mustload.go:66] Loading cluster: pause-327125
	I1213 19:45:13.501417  224082 config.go:182] Loaded profile config "pause-327125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 19:45:13.501969  224082 cli_runner.go:164] Run: docker container inspect pause-327125 --format={{.State.Status}}
	I1213 19:45:13.519510  224082 host.go:66] Checking if "pause-327125" exists ...
	I1213 19:45:13.519894  224082 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 19:45:13.580584  224082 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-13 19:45:13.569091162 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 19:45:13.581249  224082 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22122/minikube-v1.37.0-1765613186-22122-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765613186-22122/minikube-v1.37.0-1765613186-22122-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765613186-22122-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-327125 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) want
virtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1213 19:45:13.584533  224082 out.go:179] * Pausing node pause-327125 ... 
	I1213 19:45:13.588402  224082 host.go:66] Checking if "pause-327125" exists ...
	I1213 19:45:13.588735  224082 ssh_runner.go:195] Run: systemctl --version
	I1213 19:45:13.588797  224082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-327125
	I1213 19:45:13.608095  224082 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33023 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/pause-327125/id_rsa Username:docker}
	I1213 19:45:13.712348  224082 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 19:45:13.727067  224082 pause.go:52] kubelet running: true
	I1213 19:45:13.727136  224082 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1213 19:45:13.928470  224082 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1213 19:45:13.928605  224082 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1213 19:45:13.999355  224082 cri.go:89] found id: "42beff3f4415684d2db4b4f6cd38d8017bd8b45ccc3e0fffa01fd65f6646bc7f"
	I1213 19:45:13.999379  224082 cri.go:89] found id: "b90ed5b617a8e5e9b6b1c998531c7c69f3763c7208d2c12026c5e662fbea0428"
	I1213 19:45:13.999385  224082 cri.go:89] found id: "55de777e8b51f0c3aa3fb1f964df14c259552a6dbc6091767e6e1ac531f820ce"
	I1213 19:45:13.999389  224082 cri.go:89] found id: "1a40f83317314c07f555fa39401a8be922e3f11b98c3806ff21541a00bbf5124"
	I1213 19:45:13.999393  224082 cri.go:89] found id: "dcfd38d527b6be2d39e6ea9800a55589660af1cc8f83143bc6a628b2a6cddcd8"
	I1213 19:45:13.999396  224082 cri.go:89] found id: "8f5ba1ee2810a03a1e4142f99dbd279938b0c93175c0f6e1e7cea4d27503ead4"
	I1213 19:45:13.999399  224082 cri.go:89] found id: "cf30731ee12b967c35a6cb52d0e3eb3ae3960ec63dd7bb09a968da9f43eebffb"
	I1213 19:45:13.999402  224082 cri.go:89] found id: "7220c98a72257ad1aafe49b3bb8b08900afa0ea714b2d4d6646ef31da20fa812"
	I1213 19:45:13.999406  224082 cri.go:89] found id: "1ed1dff88cccfc264be42d8a89f25edba5cdd04758cb56c2f4f47d5db62de61c"
	I1213 19:45:13.999415  224082 cri.go:89] found id: "75d5802e02f12610e502e79be2dfa4c49a2d962ac8ba1a7e6706a97f9dcc1ae1"
	I1213 19:45:13.999419  224082 cri.go:89] found id: "aaf565413e1949b50c1ec1ad4e41419d439117e48ca481c22c331764e7731b89"
	I1213 19:45:13.999422  224082 cri.go:89] found id: "76981bd3c6c8f820b72bd027ca5829b5098f61b47ed859bd1e2fd64fa786a137"
	I1213 19:45:13.999426  224082 cri.go:89] found id: "68c32cc7d3c1f50302bec49c92368e3854e8146147b27d256d8c15e40407d1b2"
	I1213 19:45:13.999430  224082 cri.go:89] found id: "8bdee30f7b308a0339fbe56206ca3d6a98e2801a472b45dc376f396eb6767b8b"
	I1213 19:45:13.999433  224082 cri.go:89] found id: ""
	I1213 19:45:13.999501  224082 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 19:45:14.010918  224082 retry.go:31] will retry after 374.488015ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T19:45:14Z" level=error msg="open /run/runc: no such file or directory"
	I1213 19:45:14.386570  224082 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 19:45:14.399814  224082 pause.go:52] kubelet running: false
	I1213 19:45:14.399899  224082 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1213 19:45:14.538462  224082 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1213 19:45:14.538563  224082 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1213 19:45:14.607117  224082 cri.go:89] found id: "42beff3f4415684d2db4b4f6cd38d8017bd8b45ccc3e0fffa01fd65f6646bc7f"
	I1213 19:45:14.607140  224082 cri.go:89] found id: "b90ed5b617a8e5e9b6b1c998531c7c69f3763c7208d2c12026c5e662fbea0428"
	I1213 19:45:14.607145  224082 cri.go:89] found id: "55de777e8b51f0c3aa3fb1f964df14c259552a6dbc6091767e6e1ac531f820ce"
	I1213 19:45:14.607149  224082 cri.go:89] found id: "1a40f83317314c07f555fa39401a8be922e3f11b98c3806ff21541a00bbf5124"
	I1213 19:45:14.607157  224082 cri.go:89] found id: "dcfd38d527b6be2d39e6ea9800a55589660af1cc8f83143bc6a628b2a6cddcd8"
	I1213 19:45:14.607162  224082 cri.go:89] found id: "8f5ba1ee2810a03a1e4142f99dbd279938b0c93175c0f6e1e7cea4d27503ead4"
	I1213 19:45:14.607188  224082 cri.go:89] found id: "cf30731ee12b967c35a6cb52d0e3eb3ae3960ec63dd7bb09a968da9f43eebffb"
	I1213 19:45:14.607192  224082 cri.go:89] found id: "7220c98a72257ad1aafe49b3bb8b08900afa0ea714b2d4d6646ef31da20fa812"
	I1213 19:45:14.607195  224082 cri.go:89] found id: "1ed1dff88cccfc264be42d8a89f25edba5cdd04758cb56c2f4f47d5db62de61c"
	I1213 19:45:14.607201  224082 cri.go:89] found id: "75d5802e02f12610e502e79be2dfa4c49a2d962ac8ba1a7e6706a97f9dcc1ae1"
	I1213 19:45:14.607208  224082 cri.go:89] found id: "aaf565413e1949b50c1ec1ad4e41419d439117e48ca481c22c331764e7731b89"
	I1213 19:45:14.607211  224082 cri.go:89] found id: "76981bd3c6c8f820b72bd027ca5829b5098f61b47ed859bd1e2fd64fa786a137"
	I1213 19:45:14.607214  224082 cri.go:89] found id: "68c32cc7d3c1f50302bec49c92368e3854e8146147b27d256d8c15e40407d1b2"
	I1213 19:45:14.607217  224082 cri.go:89] found id: "8bdee30f7b308a0339fbe56206ca3d6a98e2801a472b45dc376f396eb6767b8b"
	I1213 19:45:14.607220  224082 cri.go:89] found id: ""
	I1213 19:45:14.607281  224082 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 19:45:14.618419  224082 retry.go:31] will retry after 326.964218ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T19:45:14Z" level=error msg="open /run/runc: no such file or directory"
	I1213 19:45:14.946085  224082 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 19:45:14.959444  224082 pause.go:52] kubelet running: false
	I1213 19:45:14.959569  224082 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1213 19:45:15.138277  224082 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1213 19:45:15.138397  224082 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1213 19:45:15.216532  224082 cri.go:89] found id: "42beff3f4415684d2db4b4f6cd38d8017bd8b45ccc3e0fffa01fd65f6646bc7f"
	I1213 19:45:15.216564  224082 cri.go:89] found id: "b90ed5b617a8e5e9b6b1c998531c7c69f3763c7208d2c12026c5e662fbea0428"
	I1213 19:45:15.216570  224082 cri.go:89] found id: "55de777e8b51f0c3aa3fb1f964df14c259552a6dbc6091767e6e1ac531f820ce"
	I1213 19:45:15.216573  224082 cri.go:89] found id: "1a40f83317314c07f555fa39401a8be922e3f11b98c3806ff21541a00bbf5124"
	I1213 19:45:15.216577  224082 cri.go:89] found id: "dcfd38d527b6be2d39e6ea9800a55589660af1cc8f83143bc6a628b2a6cddcd8"
	I1213 19:45:15.216581  224082 cri.go:89] found id: "8f5ba1ee2810a03a1e4142f99dbd279938b0c93175c0f6e1e7cea4d27503ead4"
	I1213 19:45:15.216584  224082 cri.go:89] found id: "cf30731ee12b967c35a6cb52d0e3eb3ae3960ec63dd7bb09a968da9f43eebffb"
	I1213 19:45:15.216587  224082 cri.go:89] found id: "7220c98a72257ad1aafe49b3bb8b08900afa0ea714b2d4d6646ef31da20fa812"
	I1213 19:45:15.216590  224082 cri.go:89] found id: "1ed1dff88cccfc264be42d8a89f25edba5cdd04758cb56c2f4f47d5db62de61c"
	I1213 19:45:15.216614  224082 cri.go:89] found id: "75d5802e02f12610e502e79be2dfa4c49a2d962ac8ba1a7e6706a97f9dcc1ae1"
	I1213 19:45:15.216624  224082 cri.go:89] found id: "aaf565413e1949b50c1ec1ad4e41419d439117e48ca481c22c331764e7731b89"
	I1213 19:45:15.216628  224082 cri.go:89] found id: "76981bd3c6c8f820b72bd027ca5829b5098f61b47ed859bd1e2fd64fa786a137"
	I1213 19:45:15.216653  224082 cri.go:89] found id: "68c32cc7d3c1f50302bec49c92368e3854e8146147b27d256d8c15e40407d1b2"
	I1213 19:45:15.216661  224082 cri.go:89] found id: "8bdee30f7b308a0339fbe56206ca3d6a98e2801a472b45dc376f396eb6767b8b"
	I1213 19:45:15.216679  224082 cri.go:89] found id: ""
	I1213 19:45:15.216748  224082 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 19:45:15.231260  224082 out.go:203] 
	W1213 19:45:15.234217  224082 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T19:45:15Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T19:45:15Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 19:45:15.234240  224082 out.go:285] * 
	* 
	W1213 19:45:15.239198  224082 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 19:45:15.242084  224082 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-327125 --alsologtostderr -v=5" : exit status 80
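Note on the failure above: the stderr shows every "sudo runc list -f json" call exiting 1 because /run/runc does not exist on this CRI-O node, and once the retries are exhausted minikube surfaces it as GUEST_PAUSE, so the pause command exits 80. The following is a minimal Go sketch of that list-and-retry step, for illustration only; the helper name, retry count, and delay are assumptions, not minikube's source.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// listRuncContainers is a hypothetical stand-in for the step logged above as
// "Run: sudo runc list -f json".
func listRuncContainers() ([]byte, error) {
	return exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
}

func main() {
	var lastErr error
	for attempt := 1; attempt <= 3; attempt++ {
		out, err := listRuncContainers()
		if err == nil {
			fmt.Printf("running containers: %s\n", out)
			return
		}
		// On this node runc has no state directory at /run/runc, so the call
		// keeps failing with "open /run/runc: no such file or directory".
		lastErr = fmt.Errorf("list running: runc: %w: %s", err, out)
		time.Sleep(400 * time.Millisecond)
	}
	// minikube reports the final failure as "Exiting due to GUEST_PAUSE" and
	// the pause command exits with status 80, which is what the test sees.
	fmt.Println(lastErr)
}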
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-327125
helpers_test.go:244: (dbg) docker inspect pause-327125:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "df299981377d94cc033c7b39c26e6775862cd9897688cd80c5f78b936632f181",
	        "Created": "2025-12-13T19:43:26.814661435Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 220175,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T19:43:26.893266577Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/df299981377d94cc033c7b39c26e6775862cd9897688cd80c5f78b936632f181/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/df299981377d94cc033c7b39c26e6775862cd9897688cd80c5f78b936632f181/hostname",
	        "HostsPath": "/var/lib/docker/containers/df299981377d94cc033c7b39c26e6775862cd9897688cd80c5f78b936632f181/hosts",
	        "LogPath": "/var/lib/docker/containers/df299981377d94cc033c7b39c26e6775862cd9897688cd80c5f78b936632f181/df299981377d94cc033c7b39c26e6775862cd9897688cd80c5f78b936632f181-json.log",
	        "Name": "/pause-327125",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-327125:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-327125",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "df299981377d94cc033c7b39c26e6775862cd9897688cd80c5f78b936632f181",
	                "LowerDir": "/var/lib/docker/overlay2/926ec214a9aead4df2c0cd0cdb4af9c4a51e20d3f781947ec935a936412113c3-init/diff:/var/lib/docker/overlay2/4cda671c3c20fb572bbb254b6cb2d66de67b46788c2aa883ec19024f1ff16f23/diff",
	                "MergedDir": "/var/lib/docker/overlay2/926ec214a9aead4df2c0cd0cdb4af9c4a51e20d3f781947ec935a936412113c3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/926ec214a9aead4df2c0cd0cdb4af9c4a51e20d3f781947ec935a936412113c3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/926ec214a9aead4df2c0cd0cdb4af9c4a51e20d3f781947ec935a936412113c3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-327125",
	                "Source": "/var/lib/docker/volumes/pause-327125/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-327125",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-327125",
	                "name.minikube.sigs.k8s.io": "pause-327125",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8b7bc60ac907d9319abcb07fc89111bc1bbaa28370d282bf032c477efe14ec24",
	            "SandboxKey": "/var/run/docker/netns/8b7bc60ac907",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33023"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33024"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33027"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33025"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33026"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-327125": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:0c:07:f3:01:18",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c7be3b78199daca1c75d376ac38565212667e25c95a34efc8475f8ae1f2894dc",
	                    "EndpointID": "0f4a96fede720f51c2687113de577845dcc4a5c2ed7aceca185bf76727dd2e88",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-327125",
	                        "df299981377d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
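For reference, the 22/tcp host port in the inspect output above (127.0.0.1:33023) is the same mapping the libmachine lines later in this log read with the Go template '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'. Below is a small Go sketch of pulling that field out of the raw inspect JSON; it assumes docker and the pause-327125 container are still present, and the struct models only the fields used here.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// container models only the slice of `docker inspect` output needed below.
type container struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "inspect", "pause-327125").Output()
	if err != nil {
		panic(err)
	}
	var cs []container
	if err := json.Unmarshal(out, &cs); err != nil {
		panic(err)
	}
	// In this report the mapping is 127.0.0.1:33023, which is where the SSH
	// provisioning steps later in the log connect.
	for _, p := range cs[0].NetworkSettings.Ports["22/tcp"] {
		fmt.Printf("22/tcp published on %s:%s\n", p.HostIp, p.HostPort)
	}
}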
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-327125 -n pause-327125
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-327125 -n pause-327125: exit status 2 (350.60746ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
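The status check above can be reproduced directly. This is a short sketch, assuming the out/minikube-linux-arm64 binary and the pause-327125 profile from this run are still available, of running the same status command and reading its exit code the way the post-mortem helper does; in this run it printed "Running" but exited 2, which the helper records as "(may be ok)".

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.Host}}", "-p", "pause-327125", "-n", "pause-327125")
	out, err := cmd.Output()
	fmt.Printf("host: %s\n", out) // "Running" in this report

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// A non-zero code here (2 in this run) does not by itself abort the
		// post-mortem; helpers_test.go logs it as "(may be ok)" and moves on.
		fmt.Println("status exit code:", exitErr.ExitCode())
	}
}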
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p pause-327125 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p pause-327125 logs -n 25: (1.379040864s)
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                       │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-255151 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                           │ NoKubernetes-255151       │ jenkins │ v1.37.0 │ 13 Dec 25 19:35 UTC │ 13 Dec 25 19:35 UTC │
	│ start   │ -p missing-upgrade-208144 --memory=3072 --driver=docker  --container-runtime=crio                                                               │ missing-upgrade-208144    │ jenkins │ v1.35.0 │ 13 Dec 25 19:35 UTC │ 13 Dec 25 19:36 UTC │
	│ start   │ -p NoKubernetes-255151 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-255151       │ jenkins │ v1.37.0 │ 13 Dec 25 19:36 UTC │ 13 Dec 25 19:36 UTC │
	│ delete  │ -p NoKubernetes-255151                                                                                                                          │ NoKubernetes-255151       │ jenkins │ v1.37.0 │ 13 Dec 25 19:36 UTC │ 13 Dec 25 19:36 UTC │
	│ start   │ -p NoKubernetes-255151 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-255151       │ jenkins │ v1.37.0 │ 13 Dec 25 19:36 UTC │ 13 Dec 25 19:36 UTC │
	│ ssh     │ -p NoKubernetes-255151 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-255151       │ jenkins │ v1.37.0 │ 13 Dec 25 19:36 UTC │                     │
	│ start   │ -p missing-upgrade-208144 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ missing-upgrade-208144    │ jenkins │ v1.37.0 │ 13 Dec 25 19:36 UTC │ 13 Dec 25 19:37 UTC │
	│ stop    │ -p NoKubernetes-255151                                                                                                                          │ NoKubernetes-255151       │ jenkins │ v1.37.0 │ 13 Dec 25 19:36 UTC │ 13 Dec 25 19:36 UTC │
	│ start   │ -p NoKubernetes-255151 --driver=docker  --container-runtime=crio                                                                                │ NoKubernetes-255151       │ jenkins │ v1.37.0 │ 13 Dec 25 19:36 UTC │ 13 Dec 25 19:36 UTC │
	│ ssh     │ -p NoKubernetes-255151 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-255151       │ jenkins │ v1.37.0 │ 13 Dec 25 19:36 UTC │                     │
	│ delete  │ -p NoKubernetes-255151                                                                                                                          │ NoKubernetes-255151       │ jenkins │ v1.37.0 │ 13 Dec 25 19:36 UTC │ 13 Dec 25 19:36 UTC │
	│ start   │ -p kubernetes-upgrade-203932 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio        │ kubernetes-upgrade-203932 │ jenkins │ v1.37.0 │ 13 Dec 25 19:36 UTC │ 13 Dec 25 19:37 UTC │
	│ delete  │ -p missing-upgrade-208144                                                                                                                       │ missing-upgrade-208144    │ jenkins │ v1.37.0 │ 13 Dec 25 19:37 UTC │ 13 Dec 25 19:37 UTC │
	│ stop    │ -p kubernetes-upgrade-203932                                                                                                                    │ kubernetes-upgrade-203932 │ jenkins │ v1.37.0 │ 13 Dec 25 19:37 UTC │ 13 Dec 25 19:37 UTC │
	│ start   │ -p kubernetes-upgrade-203932 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-203932 │ jenkins │ v1.37.0 │ 13 Dec 25 19:37 UTC │                     │
	│ start   │ -p stopped-upgrade-825838 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ stopped-upgrade-825838    │ jenkins │ v1.35.0 │ 13 Dec 25 19:37 UTC │ 13 Dec 25 19:37 UTC │
	│ stop    │ stopped-upgrade-825838 stop                                                                                                                     │ stopped-upgrade-825838    │ jenkins │ v1.35.0 │ 13 Dec 25 19:37 UTC │ 13 Dec 25 19:37 UTC │
	│ start   │ -p stopped-upgrade-825838 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ stopped-upgrade-825838    │ jenkins │ v1.37.0 │ 13 Dec 25 19:37 UTC │ 13 Dec 25 19:42 UTC │
	│ delete  │ -p stopped-upgrade-825838                                                                                                                       │ stopped-upgrade-825838    │ jenkins │ v1.37.0 │ 13 Dec 25 19:42 UTC │ 13 Dec 25 19:42 UTC │
	│ start   │ -p running-upgrade-947759 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ running-upgrade-947759    │ jenkins │ v1.35.0 │ 13 Dec 25 19:42 UTC │ 13 Dec 25 19:42 UTC │
	│ start   │ -p running-upgrade-947759 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ running-upgrade-947759    │ jenkins │ v1.37.0 │ 13 Dec 25 19:42 UTC │ 13 Dec 25 19:43 UTC │
	│ delete  │ -p running-upgrade-947759                                                                                                                       │ running-upgrade-947759    │ jenkins │ v1.37.0 │ 13 Dec 25 19:43 UTC │ 13 Dec 25 19:43 UTC │
	│ start   │ -p pause-327125 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                       │ pause-327125              │ jenkins │ v1.37.0 │ 13 Dec 25 19:43 UTC │ 13 Dec 25 19:44 UTC │
	│ start   │ -p pause-327125 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                │ pause-327125              │ jenkins │ v1.37.0 │ 13 Dec 25 19:44 UTC │ 13 Dec 25 19:45 UTC │
	│ pause   │ -p pause-327125 --alsologtostderr -v=5                                                                                                          │ pause-327125              │ jenkins │ v1.37.0 │ 13 Dec 25 19:45 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 19:44:43
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 19:44:43.254898  222758 out.go:360] Setting OutFile to fd 1 ...
	I1213 19:44:43.255014  222758 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 19:44:43.255023  222758 out.go:374] Setting ErrFile to fd 2...
	I1213 19:44:43.255027  222758 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 19:44:43.255307  222758 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-2686/.minikube/bin
	I1213 19:44:43.255654  222758 out.go:368] Setting JSON to false
	I1213 19:44:43.256577  222758 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":8836,"bootTime":1765646248,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 19:44:43.256645  222758 start.go:143] virtualization:  
	I1213 19:44:43.259679  222758 out.go:179] * [pause-327125] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 19:44:43.263586  222758 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 19:44:43.263697  222758 notify.go:221] Checking for updates...
	I1213 19:44:43.271243  222758 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 19:44:43.274276  222758 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-2686/kubeconfig
	I1213 19:44:43.277105  222758 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-2686/.minikube
	I1213 19:44:43.280003  222758 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 19:44:43.283045  222758 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 19:44:43.286313  222758 config.go:182] Loaded profile config "pause-327125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 19:44:43.286875  222758 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 19:44:43.315382  222758 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 19:44:43.315508  222758 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 19:44:43.372337  222758 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-13 19:44:43.362785272 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 19:44:43.372455  222758 docker.go:319] overlay module found
	I1213 19:44:43.375630  222758 out.go:179] * Using the docker driver based on existing profile
	I1213 19:44:43.378523  222758 start.go:309] selected driver: docker
	I1213 19:44:43.378543  222758 start.go:927] validating driver "docker" against &{Name:pause-327125 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-327125 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 19:44:43.378675  222758 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 19:44:43.378796  222758 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 19:44:43.433118  222758 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-13 19:44:43.423005553 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 19:44:43.433660  222758 cni.go:84] Creating CNI manager for ""
	I1213 19:44:43.433730  222758 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 19:44:43.433784  222758 start.go:353] cluster config:
	{Name:pause-327125 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-327125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false
storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 19:44:43.438798  222758 out.go:179] * Starting "pause-327125" primary control-plane node in "pause-327125" cluster
	I1213 19:44:43.441680  222758 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 19:44:43.444711  222758 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 19:44:43.447581  222758 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 19:44:43.447629  222758 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-2686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1213 19:44:43.447639  222758 cache.go:65] Caching tarball of preloaded images
	I1213 19:44:43.447666  222758 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 19:44:43.447724  222758 preload.go:238] Found /home/jenkins/minikube-integration/22122-2686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 19:44:43.447734  222758 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1213 19:44:43.447879  222758 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/pause-327125/config.json ...
	I1213 19:44:43.466955  222758 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 19:44:43.466977  222758 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 19:44:43.466997  222758 cache.go:243] Successfully downloaded all kic artifacts
	I1213 19:44:43.467025  222758 start.go:360] acquireMachinesLock for pause-327125: {Name:mka7d8a1169e3d541c2f31839cc969c6ea065386 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 19:44:43.467093  222758 start.go:364] duration metric: took 41.28µs to acquireMachinesLock for "pause-327125"
	I1213 19:44:43.467117  222758 start.go:96] Skipping create...Using existing machine configuration
	I1213 19:44:43.467129  222758 fix.go:54] fixHost starting: 
	I1213 19:44:43.467392  222758 cli_runner.go:164] Run: docker container inspect pause-327125 --format={{.State.Status}}
	I1213 19:44:43.483933  222758 fix.go:112] recreateIfNeeded on pause-327125: state=Running err=<nil>
	W1213 19:44:43.483969  222758 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 19:44:43.487108  222758 out.go:252] * Updating the running docker "pause-327125" container ...
	I1213 19:44:43.487149  222758 machine.go:94] provisionDockerMachine start ...
	I1213 19:44:43.487229  222758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-327125
	I1213 19:44:43.506354  222758 main.go:143] libmachine: Using SSH client type: native
	I1213 19:44:43.506679  222758 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33023 <nil> <nil>}
	I1213 19:44:43.506695  222758 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 19:44:43.656851  222758 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-327125
	
	I1213 19:44:43.656879  222758 ubuntu.go:182] provisioning hostname "pause-327125"
	I1213 19:44:43.656941  222758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-327125
	I1213 19:44:43.675831  222758 main.go:143] libmachine: Using SSH client type: native
	I1213 19:44:43.676144  222758 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33023 <nil> <nil>}
	I1213 19:44:43.676155  222758 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-327125 && echo "pause-327125" | sudo tee /etc/hostname
	I1213 19:44:43.845971  222758 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-327125
	
	I1213 19:44:43.846069  222758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-327125
	I1213 19:44:43.871939  222758 main.go:143] libmachine: Using SSH client type: native
	I1213 19:44:43.872256  222758 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33023 <nil> <nil>}
	I1213 19:44:43.872278  222758 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-327125' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-327125/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-327125' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 19:44:44.041437  222758 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 19:44:44.041470  222758 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-2686/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-2686/.minikube}
	I1213 19:44:44.041503  222758 ubuntu.go:190] setting up certificates
	I1213 19:44:44.041512  222758 provision.go:84] configureAuth start
	I1213 19:44:44.041587  222758 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-327125
	I1213 19:44:44.060050  222758 provision.go:143] copyHostCerts
	I1213 19:44:44.060122  222758 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem, removing ...
	I1213 19:44:44.060131  222758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem
	I1213 19:44:44.060208  222758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem (1082 bytes)
	I1213 19:44:44.060312  222758 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem, removing ...
	I1213 19:44:44.060317  222758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem
	I1213 19:44:44.060345  222758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem (1123 bytes)
	I1213 19:44:44.060410  222758 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem, removing ...
	I1213 19:44:44.060414  222758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem
	I1213 19:44:44.060438  222758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem (1675 bytes)
	I1213 19:44:44.060491  222758 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem org=jenkins.pause-327125 san=[127.0.0.1 192.168.85.2 localhost minikube pause-327125]
	I1213 19:44:44.195665  222758 provision.go:177] copyRemoteCerts
	I1213 19:44:44.195742  222758 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 19:44:44.195778  222758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-327125
	I1213 19:44:44.213677  222758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33023 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/pause-327125/id_rsa Username:docker}
	I1213 19:44:44.321575  222758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1213 19:44:44.339711  222758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 19:44:44.360780  222758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 19:44:44.379613  222758 provision.go:87] duration metric: took 338.079115ms to configureAuth
	I1213 19:44:44.379640  222758 ubuntu.go:206] setting minikube options for container-runtime
	I1213 19:44:44.379861  222758 config.go:182] Loaded profile config "pause-327125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 19:44:44.379953  222758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-327125
	I1213 19:44:44.398782  222758 main.go:143] libmachine: Using SSH client type: native
	I1213 19:44:44.399106  222758 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33023 <nil> <nil>}
	I1213 19:44:44.399120  222758 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 19:44:49.770115  222758 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 19:44:49.770148  222758 machine.go:97] duration metric: took 6.282990346s to provisionDockerMachine
	I1213 19:44:49.770160  222758 start.go:293] postStartSetup for "pause-327125" (driver="docker")
	I1213 19:44:49.770171  222758 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 19:44:49.770244  222758 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 19:44:49.770291  222758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-327125
	I1213 19:44:49.790601  222758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33023 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/pause-327125/id_rsa Username:docker}
	I1213 19:44:49.897250  222758 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 19:44:49.900737  222758 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 19:44:49.900773  222758 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 19:44:49.900785  222758 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-2686/.minikube/addons for local assets ...
	I1213 19:44:49.900837  222758 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-2686/.minikube/files for local assets ...
	I1213 19:44:49.900920  222758 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem -> 46372.pem in /etc/ssl/certs
	I1213 19:44:49.901148  222758 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 19:44:49.909135  222758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem --> /etc/ssl/certs/46372.pem (1708 bytes)
	I1213 19:44:49.927212  222758 start.go:296] duration metric: took 157.036913ms for postStartSetup
	I1213 19:44:49.927291  222758 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 19:44:49.927349  222758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-327125
	I1213 19:44:49.944931  222758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33023 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/pause-327125/id_rsa Username:docker}
	I1213 19:44:50.058747  222758 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 19:44:50.064244  222758 fix.go:56] duration metric: took 6.597108627s for fixHost
	I1213 19:44:50.064285  222758 start.go:83] releasing machines lock for "pause-327125", held for 6.597166671s
	I1213 19:44:50.064356  222758 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-327125
	I1213 19:44:50.081775  222758 ssh_runner.go:195] Run: cat /version.json
	I1213 19:44:50.081845  222758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-327125
	I1213 19:44:50.081848  222758 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 19:44:50.081920  222758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-327125
	I1213 19:44:50.103964  222758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33023 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/pause-327125/id_rsa Username:docker}
	I1213 19:44:50.103964  222758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33023 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/pause-327125/id_rsa Username:docker}
	I1213 19:44:50.209564  222758 ssh_runner.go:195] Run: systemctl --version
	I1213 19:44:50.297918  222758 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 19:44:50.338112  222758 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 19:44:50.342517  222758 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 19:44:50.342591  222758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 19:44:50.351208  222758 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 19:44:50.351235  222758 start.go:496] detecting cgroup driver to use...
	I1213 19:44:50.351265  222758 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 19:44:50.351319  222758 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 19:44:50.366166  222758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 19:44:50.379395  222758 docker.go:218] disabling cri-docker service (if available) ...
	I1213 19:44:50.379462  222758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 19:44:50.394972  222758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 19:44:50.407809  222758 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 19:44:50.544599  222758 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 19:44:50.705991  222758 docker.go:234] disabling docker service ...
	I1213 19:44:50.706058  222758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 19:44:50.721001  222758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 19:44:50.734274  222758 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 19:44:50.866646  222758 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 19:44:51.007402  222758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 19:44:51.022578  222758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 19:44:51.036927  222758 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 19:44:51.037084  222758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:44:51.047017  222758 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 19:44:51.047088  222758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:44:51.057189  222758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:44:51.066535  222758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:44:51.076017  222758 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 19:44:51.084520  222758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:44:51.094261  222758 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:44:51.103462  222758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:44:51.112885  222758 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 19:44:51.120922  222758 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 19:44:51.128996  222758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 19:44:51.256437  222758 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 19:44:51.481764  222758 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 19:44:51.481868  222758 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 19:44:51.485841  222758 start.go:564] Will wait 60s for crictl version
	I1213 19:44:51.485905  222758 ssh_runner.go:195] Run: which crictl
	I1213 19:44:51.489413  222758 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 19:44:51.513454  222758 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 19:44:51.513565  222758 ssh_runner.go:195] Run: crio --version
	I1213 19:44:51.541229  222758 ssh_runner.go:195] Run: crio --version
	I1213 19:44:51.571059  222758 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1213 19:44:51.574084  222758 cli_runner.go:164] Run: docker network inspect pause-327125 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 19:44:51.590544  222758 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1213 19:44:51.594446  222758 kubeadm.go:884] updating cluster {Name:pause-327125 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-327125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 19:44:51.594590  222758 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 19:44:51.594654  222758 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 19:44:51.626769  222758 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 19:44:51.626796  222758 crio.go:433] Images already preloaded, skipping extraction
	I1213 19:44:51.626851  222758 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 19:44:51.651922  222758 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 19:44:51.651947  222758 cache_images.go:86] Images are preloaded, skipping loading
	I1213 19:44:51.651955  222758 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1213 19:44:51.652062  222758 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-327125 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-327125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 19:44:51.652137  222758 ssh_runner.go:195] Run: crio config
	I1213 19:44:51.721171  222758 cni.go:84] Creating CNI manager for ""
	I1213 19:44:51.721242  222758 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 19:44:51.721273  222758 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 19:44:51.721325  222758 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-327125 NodeName:pause-327125 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 19:44:51.721512  222758 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-327125"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
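The kubeadm config printed above is one multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A minimal sketch for inspecting such a file outside the test run, assuming gopkg.in/yaml.v3 as the decoder (an assumption; any YAML library works), prints each document's apiVersion and kind:

// Sketch only: walk the multi-document kubeadm config generated above and print
// each apiVersion/kind. The path matches the file scp'd later in this log.
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for i := 1; ; i++ {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("doc %d: %s %s\n", i, doc.APIVersion, doc.Kind)
	}
}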
	I1213 19:44:51.721629  222758 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 19:44:51.729218  222758 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 19:44:51.729294  222758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 19:44:51.736627  222758 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1213 19:44:51.749223  222758 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 19:44:51.761927  222758 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1213 19:44:51.774534  222758 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1213 19:44:51.778297  222758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 19:44:51.911710  222758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 19:44:51.925597  222758 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/pause-327125 for IP: 192.168.85.2
	I1213 19:44:51.925616  222758 certs.go:195] generating shared ca certs ...
	I1213 19:44:51.925631  222758 certs.go:227] acquiring lock for ca certs: {Name:mkf9b87b9a1a82bdfcae8a54365751154f6d858f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:44:51.925752  222758 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key
	I1213 19:44:51.925794  222758 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key
	I1213 19:44:51.925801  222758 certs.go:257] generating profile certs ...
	I1213 19:44:51.925885  222758 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/pause-327125/client.key
	I1213 19:44:51.925957  222758 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/pause-327125/apiserver.key.0d7ed32c
	I1213 19:44:51.925997  222758 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/pause-327125/proxy-client.key
	I1213 19:44:51.926104  222758 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637.pem (1338 bytes)
	W1213 19:44:51.926156  222758 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637_empty.pem, impossibly tiny 0 bytes
	I1213 19:44:51.926165  222758 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 19:44:51.926191  222758 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem (1082 bytes)
	I1213 19:44:51.926216  222758 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem (1123 bytes)
	I1213 19:44:51.926238  222758 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem (1675 bytes)
	I1213 19:44:51.926282  222758 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem (1708 bytes)
	I1213 19:44:51.926874  222758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 19:44:51.945113  222758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 19:44:51.963664  222758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 19:44:51.986349  222758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 19:44:52.006266  222758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/pause-327125/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1213 19:44:52.024906  222758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/pause-327125/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 19:44:52.043591  222758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/pause-327125/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 19:44:52.062159  222758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/pause-327125/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 19:44:52.085594  222758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637.pem --> /usr/share/ca-certificates/4637.pem (1338 bytes)
	I1213 19:44:52.106848  222758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem --> /usr/share/ca-certificates/46372.pem (1708 bytes)
	I1213 19:44:52.127642  222758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 19:44:52.149968  222758 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 19:44:52.162941  222758 ssh_runner.go:195] Run: openssl version
	I1213 19:44:52.169638  222758 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/46372.pem
	I1213 19:44:52.176903  222758 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/46372.pem /etc/ssl/certs/46372.pem
	I1213 19:44:52.184542  222758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/46372.pem
	I1213 19:44:52.188335  222758 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 18:27 /usr/share/ca-certificates/46372.pem
	I1213 19:44:52.188403  222758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/46372.pem
	I1213 19:44:52.229460  222758 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 19:44:52.236848  222758 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:44:52.243954  222758 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 19:44:52.251477  222758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:44:52.255401  222758 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:44:52.255490  222758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:44:52.296614  222758 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 19:44:52.304269  222758 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4637.pem
	I1213 19:44:52.311847  222758 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4637.pem /etc/ssl/certs/4637.pem
	I1213 19:44:52.319655  222758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4637.pem
	I1213 19:44:52.323605  222758 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 18:27 /usr/share/ca-certificates/4637.pem
	I1213 19:44:52.323672  222758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4637.pem
	I1213 19:44:52.369297  222758 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 19:44:52.376621  222758 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 19:44:52.380334  222758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 19:44:52.420950  222758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 19:44:52.461622  222758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 19:44:52.502658  222758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 19:44:52.543528  222758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 19:44:52.584342  222758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
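The six openssl invocations just above are "-checkend 86400" probes: each exits non-zero if the named certificate expires within the next 24 hours. A minimal standard-library equivalent for one of those files (path copied from the log) is sketched below:

// Sketch of the "-checkend 86400" test for a single PEM certificate: report whether
// it expires within the next 24 hours. Path taken from the log above.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate expires within 24h")
	} else {
		fmt.Println("certificate valid for at least 24h")
	}
}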
	I1213 19:44:52.625226  222758 kubeadm.go:401] StartCluster: {Name:pause-327125 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-327125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 19:44:52.625341  222758 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 19:44:52.625406  222758 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 19:44:52.653541  222758 cri.go:89] found id: "7220c98a72257ad1aafe49b3bb8b08900afa0ea714b2d4d6646ef31da20fa812"
	I1213 19:44:52.653563  222758 cri.go:89] found id: "1ed1dff88cccfc264be42d8a89f25edba5cdd04758cb56c2f4f47d5db62de61c"
	I1213 19:44:52.653568  222758 cri.go:89] found id: "75d5802e02f12610e502e79be2dfa4c49a2d962ac8ba1a7e6706a97f9dcc1ae1"
	I1213 19:44:52.653572  222758 cri.go:89] found id: "aaf565413e1949b50c1ec1ad4e41419d439117e48ca481c22c331764e7731b89"
	I1213 19:44:52.653576  222758 cri.go:89] found id: "76981bd3c6c8f820b72bd027ca5829b5098f61b47ed859bd1e2fd64fa786a137"
	I1213 19:44:52.653579  222758 cri.go:89] found id: "68c32cc7d3c1f50302bec49c92368e3854e8146147b27d256d8c15e40407d1b2"
	I1213 19:44:52.653582  222758 cri.go:89] found id: "8bdee30f7b308a0339fbe56206ca3d6a98e2801a472b45dc376f396eb6767b8b"
	I1213 19:44:52.653586  222758 cri.go:89] found id: ""
	I1213 19:44:52.653661  222758 ssh_runner.go:195] Run: sudo runc list -f json
	W1213 19:44:52.664577  222758 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T19:44:52Z" level=error msg="open /run/runc: no such file or directory"
	I1213 19:44:52.664675  222758 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 19:44:52.672683  222758 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 19:44:52.672754  222758 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 19:44:52.672843  222758 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 19:44:52.680249  222758 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 19:44:52.680939  222758 kubeconfig.go:125] found "pause-327125" server: "https://192.168.85.2:8443"
	I1213 19:44:52.681722  222758 kapi.go:59] client config for pause-327125: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/profiles/pause-327125/client.crt", KeyFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/profiles/pause-327125/client.key", CAFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(
nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 19:44:52.682227  222758 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1213 19:44:52.682251  222758 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1213 19:44:52.682258  222758 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1213 19:44:52.682268  222758 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1213 19:44:52.682272  222758 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1213 19:44:52.682530  222758 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 19:44:52.689970  222758 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1213 19:44:52.690004  222758 kubeadm.go:602] duration metric: took 17.230445ms to restartPrimaryControlPlane
	I1213 19:44:52.690015  222758 kubeadm.go:403] duration metric: took 64.796977ms to StartCluster
	I1213 19:44:52.690059  222758 settings.go:142] acquiring lock: {Name:mkabef07beee93a0619ef6b8f854900ab9ed0899 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:44:52.690157  222758 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22122-2686/kubeconfig
	I1213 19:44:52.691011  222758 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/kubeconfig: {Name:mkd364151ba0e08b56cf2ae826abbcc274faaabf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:44:52.691257  222758 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 19:44:52.691533  222758 config.go:182] Loaded profile config "pause-327125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 19:44:52.691602  222758 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 19:44:52.695701  222758 out.go:179] * Verifying Kubernetes components...
	I1213 19:44:52.695706  222758 out.go:179] * Enabled addons: 
	I1213 19:44:52.698645  222758 addons.go:530] duration metric: took 7.039322ms for enable addons: enabled=[]
	I1213 19:44:52.698752  222758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 19:44:52.828837  222758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 19:44:52.843802  222758 node_ready.go:35] waiting up to 6m0s for node "pause-327125" to be "Ready" ...
	I1213 19:44:57.987257  222758 node_ready.go:49] node "pause-327125" is "Ready"
	I1213 19:44:57.987284  222758 node_ready.go:38] duration metric: took 5.143445481s for node "pause-327125" to be "Ready" ...
	I1213 19:44:57.987297  222758 api_server.go:52] waiting for apiserver process to appear ...
	I1213 19:44:57.987358  222758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:44:58.001627  222758 api_server.go:72] duration metric: took 5.310333758s to wait for apiserver process to appear ...
	I1213 19:44:58.001651  222758 api_server.go:88] waiting for apiserver healthz status ...
	I1213 19:44:58.001674  222758 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1213 19:44:58.116623  222758 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 19:44:58.116713  222758 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 19:44:58.502527  222758 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1213 19:44:58.511043  222758 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 19:44:58.511079  222758 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 19:44:59.002771  222758 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1213 19:44:59.010901  222758 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1213 19:44:59.011943  222758 api_server.go:141] control plane version: v1.34.2
	I1213 19:44:59.011968  222758 api_server.go:131] duration metric: took 1.010309059s to wait for apiserver health ...
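The healthz sequence above is the usual restart pattern: the first two GETs to https://192.168.85.2:8443/healthz return 500 while a few post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) are still pending, and the third returns 200 with a plain "ok". A minimal sketch of the same poll, standard library only and skipping TLS verification for brevity (minikube itself presents the profile client certificate and trusts the cluster CA, per the kapi client config earlier in this log):

// Sketch: poll the apiserver healthz endpoint until it reports ok, mirroring the
// 500 -> 500 -> 200 sequence above. InsecureSkipVerify is an assumption made to
// keep the example short; use the cluster CA in real code.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get("https://192.168.85.2:8443/healthz?verbose")
		if err != nil {
			fmt.Println("healthz not reachable yet:", err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz %d\n%s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}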
	I1213 19:44:59.011978  222758 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 19:44:59.015965  222758 system_pods.go:59] 7 kube-system pods found
	I1213 19:44:59.016040  222758 system_pods.go:61] "coredns-66bc5c9577-n9958" [22e2b1b5-7a27-4ca8-89e0-cce8c2000a1d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 19:44:59.016063  222758 system_pods.go:61] "etcd-pause-327125" [edef5c87-0b37-4ffb-9d94-2cd8e868dd10] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 19:44:59.016070  222758 system_pods.go:61] "kindnet-rrvxm" [404728d3-6e60-4ffd-8fde-d04cd97b1d71] Running
	I1213 19:44:59.016090  222758 system_pods.go:61] "kube-apiserver-pause-327125" [308c3248-32c3-4c14-96f0-ea35adf20b4e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 19:44:59.016098  222758 system_pods.go:61] "kube-controller-manager-pause-327125" [3bab01e9-17c7-4f72-ae08-e99d2710dc00] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 19:44:59.016112  222758 system_pods.go:61] "kube-proxy-wm755" [448b8703-4d1c-436d-8066-34855c077030] Running
	I1213 19:44:59.016122  222758 system_pods.go:61] "kube-scheduler-pause-327125" [9154159c-a53b-4594-a0ab-2084e70b508d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 19:44:59.016129  222758 system_pods.go:74] duration metric: took 4.143742ms to wait for pod list to return data ...
	I1213 19:44:59.016140  222758 default_sa.go:34] waiting for default service account to be created ...
	I1213 19:44:59.021743  222758 default_sa.go:45] found service account: "default"
	I1213 19:44:59.021772  222758 default_sa.go:55] duration metric: took 5.626095ms for default service account to be created ...
	I1213 19:44:59.021783  222758 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 19:44:59.025538  222758 system_pods.go:86] 7 kube-system pods found
	I1213 19:44:59.025575  222758 system_pods.go:89] "coredns-66bc5c9577-n9958" [22e2b1b5-7a27-4ca8-89e0-cce8c2000a1d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 19:44:59.025585  222758 system_pods.go:89] "etcd-pause-327125" [edef5c87-0b37-4ffb-9d94-2cd8e868dd10] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 19:44:59.025590  222758 system_pods.go:89] "kindnet-rrvxm" [404728d3-6e60-4ffd-8fde-d04cd97b1d71] Running
	I1213 19:44:59.025596  222758 system_pods.go:89] "kube-apiserver-pause-327125" [308c3248-32c3-4c14-96f0-ea35adf20b4e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 19:44:59.025604  222758 system_pods.go:89] "kube-controller-manager-pause-327125" [3bab01e9-17c7-4f72-ae08-e99d2710dc00] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 19:44:59.025608  222758 system_pods.go:89] "kube-proxy-wm755" [448b8703-4d1c-436d-8066-34855c077030] Running
	I1213 19:44:59.025614  222758 system_pods.go:89] "kube-scheduler-pause-327125" [9154159c-a53b-4594-a0ab-2084e70b508d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 19:44:59.025622  222758 system_pods.go:126] duration metric: took 3.832848ms to wait for k8s-apps to be running ...
	I1213 19:44:59.025630  222758 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 19:44:59.025687  222758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 19:44:59.039180  222758 system_svc.go:56] duration metric: took 13.538759ms WaitForService to wait for kubelet
	I1213 19:44:59.039208  222758 kubeadm.go:587] duration metric: took 6.347918333s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 19:44:59.039228  222758 node_conditions.go:102] verifying NodePressure condition ...
	I1213 19:44:59.042507  222758 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1213 19:44:59.042542  222758 node_conditions.go:123] node cpu capacity is 2
	I1213 19:44:59.042556  222758 node_conditions.go:105] duration metric: took 3.324086ms to run NodePressure ...
	I1213 19:44:59.042569  222758 start.go:242] waiting for startup goroutines ...
	I1213 19:44:59.042576  222758 start.go:247] waiting for cluster config update ...
	I1213 19:44:59.042584  222758 start.go:256] writing updated cluster config ...
	I1213 19:44:59.042902  222758 ssh_runner.go:195] Run: rm -f paused
	I1213 19:44:59.046625  222758 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 19:44:59.047255  222758 kapi.go:59] client config for pause-327125: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/profiles/pause-327125/client.crt", KeyFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/profiles/pause-327125/client.key", CAFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(
nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 19:44:59.115999  222758 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-n9958" in "kube-system" namespace to be "Ready" or be gone ...
	W1213 19:45:01.122664  222758 pod_ready.go:104] pod "coredns-66bc5c9577-n9958" is not "Ready", error: <nil>
	W1213 19:45:03.622466  222758 pod_ready.go:104] pod "coredns-66bc5c9577-n9958" is not "Ready", error: <nil>
	I1213 19:45:05.622742  222758 pod_ready.go:94] pod "coredns-66bc5c9577-n9958" is "Ready"
	I1213 19:45:05.622768  222758 pod_ready.go:86] duration metric: took 6.506738651s for pod "coredns-66bc5c9577-n9958" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 19:45:05.625727  222758 pod_ready.go:83] waiting for pod "etcd-pause-327125" in "kube-system" namespace to be "Ready" or be gone ...
	W1213 19:45:07.631323  222758 pod_ready.go:104] pod "etcd-pause-327125" is not "Ready", error: <nil>
	I1213 19:45:08.132200  222758 pod_ready.go:94] pod "etcd-pause-327125" is "Ready"
	I1213 19:45:08.132271  222758 pod_ready.go:86] duration metric: took 2.506516054s for pod "etcd-pause-327125" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 19:45:08.134844  222758 pod_ready.go:83] waiting for pod "kube-apiserver-pause-327125" in "kube-system" namespace to be "Ready" or be gone ...
	W1213 19:45:10.140924  222758 pod_ready.go:104] pod "kube-apiserver-pause-327125" is not "Ready", error: <nil>
	W1213 19:45:12.141189  222758 pod_ready.go:104] pod "kube-apiserver-pause-327125" is not "Ready", error: <nil>
	I1213 19:45:13.140442  222758 pod_ready.go:94] pod "kube-apiserver-pause-327125" is "Ready"
	I1213 19:45:13.140474  222758 pod_ready.go:86] duration metric: took 5.005603448s for pod "kube-apiserver-pause-327125" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 19:45:13.142841  222758 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-327125" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 19:45:13.147624  222758 pod_ready.go:94] pod "kube-controller-manager-pause-327125" is "Ready"
	I1213 19:45:13.147655  222758 pod_ready.go:86] duration metric: took 4.78619ms for pod "kube-controller-manager-pause-327125" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 19:45:13.150817  222758 pod_ready.go:83] waiting for pod "kube-proxy-wm755" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 19:45:13.157333  222758 pod_ready.go:94] pod "kube-proxy-wm755" is "Ready"
	I1213 19:45:13.157363  222758 pod_ready.go:86] duration metric: took 6.51593ms for pod "kube-proxy-wm755" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 19:45:13.159947  222758 pod_ready.go:83] waiting for pod "kube-scheduler-pause-327125" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 19:45:13.339143  222758 pod_ready.go:94] pod "kube-scheduler-pause-327125" is "Ready"
	I1213 19:45:13.339171  222758 pod_ready.go:86] duration metric: took 179.200709ms for pod "kube-scheduler-pause-327125" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 19:45:13.339186  222758 pod_ready.go:40] duration metric: took 14.292528046s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 19:45:13.409482  222758 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1213 19:45:13.412468  222758 out.go:179] * Done! kubectl is now configured to use "pause-327125" cluster and "default" namespace by default
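The tail of the minikube log is the "extra waiting" loop: for each label listed at 19:44:59.046, it polls until the matching kube-system pod reports Ready (coredns and kube-apiserver take a few seconds; the rest are already Ready). A rough client-go sketch of the same check, assuming the kubeconfig path shown earlier in this log:

// Sketch: list kube-system pods for each label minikube waits on and report whether
// the PodReady condition is True. Assumes client-go and the jenkins kubeconfig path
// from this log; adjust for your environment.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22122-2686/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
	}
	for _, selector := range selectors {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			ready := false
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			fmt.Printf("%-45s %-35s ready=%v\n", p.Name, selector, ready)
		}
	}
}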
	
	
	==> CRI-O <==
	Dec 13 19:44:53 pause-327125 crio[2078]: time="2025-12-13T19:44:53.402430628Z" level=info msg="Started container" PID=2367 containerID=dcfd38d527b6be2d39e6ea9800a55589660af1cc8f83143bc6a628b2a6cddcd8 description=kube-system/kube-apiserver-pause-327125/kube-apiserver id=44055caf-334e-4343-a100-9300ab4bafe0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a8ec81e982d49d4b18696c5338d90a51e7771563607dbf05511fa05876ab4ec8
	Dec 13 19:44:53 pause-327125 crio[2078]: time="2025-12-13T19:44:53.415226672Z" level=info msg="Started container" PID=2375 containerID=1a40f83317314c07f555fa39401a8be922e3f11b98c3806ff21541a00bbf5124 description=kube-system/kube-proxy-wm755/kube-proxy id=22330665-5da2-4c48-8b82-e7c4d8c41f28 name=/runtime.v1.RuntimeService/StartContainer sandboxID=dcf70b4b17a00869ec1ce46f2d13b8143a05d50e21eba3f19613a6fffbd71ca8
	Dec 13 19:44:53 pause-327125 crio[2078]: time="2025-12-13T19:44:53.43956802Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 19:44:53 pause-327125 crio[2078]: time="2025-12-13T19:44:53.452258316Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 19:44:53 pause-327125 crio[2078]: time="2025-12-13T19:44:53.474792927Z" level=info msg="Created container b90ed5b617a8e5e9b6b1c998531c7c69f3763c7208d2c12026c5e662fbea0428: kube-system/etcd-pause-327125/etcd" id=d9656fde-0d9e-4750-a1ef-84af9563ad9a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 19:44:53 pause-327125 crio[2078]: time="2025-12-13T19:44:53.485135641Z" level=info msg="Starting container: b90ed5b617a8e5e9b6b1c998531c7c69f3763c7208d2c12026c5e662fbea0428" id=96a64787-e990-4e7f-bb05-d1bd53af4cbc name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 19:44:53 pause-327125 crio[2078]: time="2025-12-13T19:44:53.489210583Z" level=info msg="Created container 55de777e8b51f0c3aa3fb1f964df14c259552a6dbc6091767e6e1ac531f820ce: kube-system/kindnet-rrvxm/kindnet-cni" id=9074a29b-ad0d-4365-a6d1-ccac31cb722b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 19:44:53 pause-327125 crio[2078]: time="2025-12-13T19:44:53.493627417Z" level=info msg="Started container" PID=2402 containerID=b90ed5b617a8e5e9b6b1c998531c7c69f3763c7208d2c12026c5e662fbea0428 description=kube-system/etcd-pause-327125/etcd id=96a64787-e990-4e7f-bb05-d1bd53af4cbc name=/runtime.v1.RuntimeService/StartContainer sandboxID=83fc490189c11dc033e72c21b3b1dba38236ce7a6ef466d1c470a6859b07a4cd
	Dec 13 19:44:53 pause-327125 crio[2078]: time="2025-12-13T19:44:53.49815882Z" level=info msg="Starting container: 55de777e8b51f0c3aa3fb1f964df14c259552a6dbc6091767e6e1ac531f820ce" id=2daae56d-b5ff-4d91-8687-3a331d5502ed name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 19:44:53 pause-327125 crio[2078]: time="2025-12-13T19:44:53.508189894Z" level=info msg="Started container" PID=2395 containerID=55de777e8b51f0c3aa3fb1f964df14c259552a6dbc6091767e6e1ac531f820ce description=kube-system/kindnet-rrvxm/kindnet-cni id=2daae56d-b5ff-4d91-8687-3a331d5502ed name=/runtime.v1.RuntimeService/StartContainer sandboxID=6bf9cbde47db43e67f53be9aae10efd0148660f01dc1d6d8000d48e6a2e98570
	Dec 13 19:44:53 pause-327125 crio[2078]: time="2025-12-13T19:44:53.524215862Z" level=info msg="Created container 42beff3f4415684d2db4b4f6cd38d8017bd8b45ccc3e0fffa01fd65f6646bc7f: kube-system/coredns-66bc5c9577-n9958/coredns" id=b0b6acd4-db5e-47f8-a6e0-163a6d417a94 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 19:44:53 pause-327125 crio[2078]: time="2025-12-13T19:44:53.525131608Z" level=info msg="Starting container: 42beff3f4415684d2db4b4f6cd38d8017bd8b45ccc3e0fffa01fd65f6646bc7f" id=cde1be8e-9d51-470f-bbe2-296f0d91abbf name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 19:44:53 pause-327125 crio[2078]: time="2025-12-13T19:44:53.527663231Z" level=info msg="Started container" PID=2414 containerID=42beff3f4415684d2db4b4f6cd38d8017bd8b45ccc3e0fffa01fd65f6646bc7f description=kube-system/coredns-66bc5c9577-n9958/coredns id=cde1be8e-9d51-470f-bbe2-296f0d91abbf name=/runtime.v1.RuntimeService/StartContainer sandboxID=0dec3714476f5a6dc95d08d2264e0ae2eea3d92f2082d134b24bb0fb7fbb9e16
	Dec 13 19:45:03 pause-327125 crio[2078]: time="2025-12-13T19:45:03.846294085Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 19:45:03 pause-327125 crio[2078]: time="2025-12-13T19:45:03.849893397Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 19:45:03 pause-327125 crio[2078]: time="2025-12-13T19:45:03.849930854Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 19:45:03 pause-327125 crio[2078]: time="2025-12-13T19:45:03.849953516Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 19:45:03 pause-327125 crio[2078]: time="2025-12-13T19:45:03.853282788Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 19:45:03 pause-327125 crio[2078]: time="2025-12-13T19:45:03.853318817Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 19:45:03 pause-327125 crio[2078]: time="2025-12-13T19:45:03.853341611Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 19:45:03 pause-327125 crio[2078]: time="2025-12-13T19:45:03.856666502Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 19:45:03 pause-327125 crio[2078]: time="2025-12-13T19:45:03.856702703Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 19:45:03 pause-327125 crio[2078]: time="2025-12-13T19:45:03.856727121Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 19:45:03 pause-327125 crio[2078]: time="2025-12-13T19:45:03.859805537Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 19:45:03 pause-327125 crio[2078]: time="2025-12-13T19:45:03.85983908Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	42beff3f44156       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   22 seconds ago       Running             coredns                   1                   0dec3714476f5       coredns-66bc5c9577-n9958               kube-system
	b90ed5b617a8e       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   22 seconds ago       Running             etcd                      1                   83fc490189c11       etcd-pause-327125                      kube-system
	55de777e8b51f       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   22 seconds ago       Running             kindnet-cni               1                   6bf9cbde47db4       kindnet-rrvxm                          kube-system
	1a40f83317314       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   22 seconds ago       Running             kube-proxy                1                   dcf70b4b17a00       kube-proxy-wm755                       kube-system
	dcfd38d527b6b       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   22 seconds ago       Running             kube-apiserver            1                   a8ec81e982d49       kube-apiserver-pause-327125            kube-system
	8f5ba1ee2810a       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   22 seconds ago       Running             kube-scheduler            1                   a8001cbe04bae       kube-scheduler-pause-327125            kube-system
	cf30731ee12b9       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   22 seconds ago       Running             kube-controller-manager   1                   690c6479d674d       kube-controller-manager-pause-327125   kube-system
	7220c98a72257       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   35 seconds ago       Exited              coredns                   0                   0dec3714476f5       coredns-66bc5c9577-n9958               kube-system
	1ed1dff88cccf       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   About a minute ago   Exited              kube-proxy                0                   dcf70b4b17a00       kube-proxy-wm755                       kube-system
	75d5802e02f12       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   6bf9cbde47db4       kindnet-rrvxm                          kube-system
	aaf565413e194       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   About a minute ago   Exited              etcd                      0                   83fc490189c11       etcd-pause-327125                      kube-system
	76981bd3c6c8f       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   About a minute ago   Exited              kube-scheduler            0                   a8001cbe04bae       kube-scheduler-pause-327125            kube-system
	68c32cc7d3c1f       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   About a minute ago   Exited              kube-apiserver            0                   a8ec81e982d49       kube-apiserver-pause-327125            kube-system
	8bdee30f7b308       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   About a minute ago   Exited              kube-controller-manager   0                   690c6479d674d       kube-controller-manager-pause-327125   kube-system
	
	
	==> coredns [42beff3f4415684d2db4b4f6cd38d8017bd8b45ccc3e0fffa01fd65f6646bc7f] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55652 - 7874 "HINFO IN 6840119838698813041.8823242800484221961. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.026057189s
	
	
	==> coredns [7220c98a72257ad1aafe49b3bb8b08900afa0ea714b2d4d6646ef31da20fa812] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50198 - 3219 "HINFO IN 3495353353782401444.6666573847651989814. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013182039s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-327125
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-327125
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=142a8bd7cb3f031b5f72a3965bb211dc77d9e1a7
	                    minikube.k8s.io/name=pause-327125
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T19_43_53_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 19:43:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-327125
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 19:45:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 19:44:40 +0000   Sat, 13 Dec 2025 19:43:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 19:44:40 +0000   Sat, 13 Dec 2025 19:43:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 19:44:40 +0000   Sat, 13 Dec 2025 19:43:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 19:44:40 +0000   Sat, 13 Dec 2025 19:44:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-327125
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 78f85184c267cd52312ad0096937f858
	  System UUID:                2b77a055-1c8f-48a2-bab3-e094df3b8f45
	  Boot ID:                    76aeba50-958b-45ee-957d-e00cd07a99b2
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-n9958                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     78s
	  kube-system                 etcd-pause-327125                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         83s
	  kube-system                 kindnet-rrvxm                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      78s
	  kube-system                 kube-apiserver-pause-327125             250m (12%)    0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-controller-manager-pause-327125    200m (10%)    0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-proxy-wm755                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-scheduler-pause-327125             100m (5%)     0 (0%)      0 (0%)           0 (0%)         83s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 75s                kube-proxy       
	  Normal   Starting                 18s                kube-proxy       
	  Warning  CgroupV1                 91s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  91s (x9 over 91s)  kubelet          Node pause-327125 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    91s (x8 over 91s)  kubelet          Node pause-327125 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     91s (x7 over 91s)  kubelet          Node pause-327125 status is now: NodeHasSufficientPID
	  Normal   Starting                 84s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 84s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  83s                kubelet          Node pause-327125 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    83s                kubelet          Node pause-327125 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     83s                kubelet          Node pause-327125 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           79s                node-controller  Node pause-327125 event: Registered Node pause-327125 in Controller
	  Normal   NodeReady                36s                kubelet          Node pause-327125 status is now: NodeReady
	  Normal   RegisteredNode           15s                node-controller  Node pause-327125 event: Registered Node pause-327125 in Controller
	
	
	==> dmesg <==
	[Dec13 19:05] overlayfs: idmapped layers are currently not supported
	[  +4.041925] overlayfs: idmapped layers are currently not supported
	[ +36.958854] overlayfs: idmapped layers are currently not supported
	[Dec13 19:06] overlayfs: idmapped layers are currently not supported
	[Dec13 19:07] overlayfs: idmapped layers are currently not supported
	[  +4.088622] overlayfs: idmapped layers are currently not supported
	[Dec13 19:16] overlayfs: idmapped layers are currently not supported
	[Dec13 19:18] overlayfs: idmapped layers are currently not supported
	[Dec13 19:22] overlayfs: idmapped layers are currently not supported
	[Dec13 19:23] overlayfs: idmapped layers are currently not supported
	[Dec13 19:24] overlayfs: idmapped layers are currently not supported
	[Dec13 19:25] overlayfs: idmapped layers are currently not supported
	[Dec13 19:26] overlayfs: idmapped layers are currently not supported
	[Dec13 19:28] overlayfs: idmapped layers are currently not supported
	[ +16.353793] overlayfs: idmapped layers are currently not supported
	[ +17.019256] overlayfs: idmapped layers are currently not supported
	[Dec13 19:29] overlayfs: idmapped layers are currently not supported
	[Dec13 19:30] overlayfs: idmapped layers are currently not supported
	[ +42.207433] overlayfs: idmapped layers are currently not supported
	[Dec13 19:31] overlayfs: idmapped layers are currently not supported
	[Dec13 19:32] overlayfs: idmapped layers are currently not supported
	[Dec13 19:33] overlayfs: idmapped layers are currently not supported
	[Dec13 19:35] overlayfs: idmapped layers are currently not supported
	[Dec13 19:36] overlayfs: idmapped layers are currently not supported
	[Dec13 19:43] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [aaf565413e1949b50c1ec1ad4e41419d439117e48ca481c22c331764e7731b89] <==
	{"level":"warn","ts":"2025-12-13T19:43:49.010539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:43:49.026157Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:43:49.050172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:43:49.079808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:43:49.115747Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:43:49.129374Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:43:49.225479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56222","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-13T19:44:44.567184Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-13T19:44:44.567254Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-327125","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-12-13T19:44:44.567369Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-13T19:44:44.855322Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-13T19:44:44.855401Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T19:44:44.855423Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-12-13T19:44:44.855474Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-12-13T19:44:44.855538Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-13T19:44:44.855572Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-13T19:44:44.855583Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T19:44:44.855603Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-13T19:44:44.855695Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-13T19:44:44.855740Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-13T19:44:44.855781Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T19:44:44.858885Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-12-13T19:44:44.858984Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T19:44:44.859058Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-13T19:44:44.859089Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-327125","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> etcd [b90ed5b617a8e5e9b6b1c998531c7c69f3763c7208d2c12026c5e662fbea0428] <==
	{"level":"warn","ts":"2025-12-13T19:44:56.498897Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:44:56.514392Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:44:56.533263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:44:56.551077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:44:56.568775Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:44:56.603622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:44:56.647146Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:44:56.681939Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:44:56.696070Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:44:56.726781Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:44:56.732958Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:44:56.756128Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:44:56.777196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:44:56.801171Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:44:56.813225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:44:56.846025Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:44:56.871283Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:44:56.881205Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:44:56.900035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:44:56.916675Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:44:56.951802Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:44:56.973361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:44:56.996007Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:44:57.034385Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:44:57.118643Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42444","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 19:45:16 up  2:27,  0 user,  load average: 2.08, 2.30, 2.07
	Linux pause-327125 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [55de777e8b51f0c3aa3fb1f964df14c259552a6dbc6091767e6e1ac531f820ce] <==
	I1213 19:44:53.631086       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1213 19:44:53.631583       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1213 19:44:53.631757       1 main.go:148] setting mtu 1500 for CNI 
	I1213 19:44:53.631803       1 main.go:178] kindnetd IP family: "ipv4"
	I1213 19:44:53.631843       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-13T19:44:53Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1213 19:44:53.846238       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1213 19:44:53.846268       1 controller.go:381] "Waiting for informer caches to sync"
	I1213 19:44:53.846277       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1213 19:44:53.846951       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1213 19:44:58.046417       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1213 19:44:58.046467       1 metrics.go:72] Registering metrics
	I1213 19:44:58.046538       1 controller.go:711] "Syncing nftables rules"
	I1213 19:45:03.845853       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1213 19:45:03.845939       1 main.go:301] handling current node
	I1213 19:45:13.846153       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1213 19:45:13.846222       1 main.go:301] handling current node
	
	
	==> kindnet [75d5802e02f12610e502e79be2dfa4c49a2d962ac8ba1a7e6706a97f9dcc1ae1] <==
	I1213 19:43:59.926189       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1213 19:43:59.926577       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1213 19:43:59.926717       1 main.go:148] setting mtu 1500 for CNI 
	I1213 19:43:59.926733       1 main.go:178] kindnetd IP family: "ipv4"
	I1213 19:43:59.926747       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-13T19:44:00Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1213 19:44:00.421188       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1213 19:44:00.421219       1 controller.go:381] "Waiting for informer caches to sync"
	I1213 19:44:00.421234       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1213 19:44:00.421691       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1213 19:44:30.420844       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1213 19:44:30.421878       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1213 19:44:30.421888       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1213 19:44:30.421996       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1213 19:44:31.822195       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1213 19:44:31.822279       1 metrics.go:72] Registering metrics
	I1213 19:44:31.822355       1 controller.go:711] "Syncing nftables rules"
	I1213 19:44:40.424677       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1213 19:44:40.424715       1 main.go:301] handling current node
	
	
	==> kube-apiserver [68c32cc7d3c1f50302bec49c92368e3854e8146147b27d256d8c15e40407d1b2] <==
	W1213 19:44:44.597393       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 19:44:44.597489       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 19:44:44.597611       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 19:44:44.597711       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 19:44:44.597819       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 19:44:44.597924       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 19:44:44.598053       1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 19:44:44.598181       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 19:44:44.598293       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 19:44:44.598397       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 19:44:44.598500       1 logging.go:55] [core] [Channel #21 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 19:44:44.598659       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 19:44:44.598780       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 19:44:44.598998       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 19:44:44.599102       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 19:44:44.599202       1 logging.go:55] [core] [Channel #195 SubChannel #197]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 19:44:44.599293       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 19:44:44.599445       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 19:44:44.599529       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 19:44:44.599614       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 19:44:44.599688       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 19:44:44.599750       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 19:44:44.599880       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 19:44:44.599950       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 19:44:44.600005       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [dcfd38d527b6be2d39e6ea9800a55589660af1cc8f83143bc6a628b2a6cddcd8] <==
	I1213 19:44:57.952053       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1213 19:44:57.956088       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1213 19:44:57.957025       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1213 19:44:57.973972       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1213 19:44:57.974153       1 policy_source.go:240] refreshing policies
	I1213 19:44:57.993082       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 19:44:58.010120       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1213 19:44:58.010363       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1213 19:44:58.011106       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1213 19:44:58.011186       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1213 19:44:58.010257       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1213 19:44:58.011409       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1213 19:44:58.011699       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1213 19:44:58.011800       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1213 19:44:58.030797       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1213 19:44:58.040371       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1213 19:44:58.063063       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1213 19:44:58.082348       1 cache.go:39] Caches are synced for autoregister controller
	E1213 19:44:58.123605       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1213 19:44:58.715229       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1213 19:44:59.931645       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1213 19:45:01.363790       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1213 19:45:01.610013       1 controller.go:667] quota admission added evaluator for: endpoints
	I1213 19:45:01.659945       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 19:45:01.711972       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [8bdee30f7b308a0339fbe56206ca3d6a98e2801a472b45dc376f396eb6767b8b] <==
	I1213 19:43:57.376511       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1213 19:43:57.376521       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1213 19:43:57.376530       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1213 19:43:57.376538       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1213 19:43:57.376238       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1213 19:43:57.383277       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1213 19:43:57.383351       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1213 19:43:57.383366       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1213 19:43:57.384331       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1213 19:43:57.391622       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1213 19:43:57.401001       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1213 19:43:57.410546       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1213 19:43:57.411756       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1213 19:43:57.411777       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1213 19:43:57.412927       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 19:43:57.412953       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1213 19:43:57.421185       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1213 19:43:57.423554       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 19:43:57.423578       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1213 19:43:57.423585       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1213 19:43:57.424483       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1213 19:43:57.425776       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1213 19:43:57.427543       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1213 19:43:57.428709       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1213 19:44:42.381936       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [cf30731ee12b967c35a6cb52d0e3eb3ae3960ec63dd7bb09a968da9f43eebffb] <==
	I1213 19:45:01.317950       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1213 19:45:01.319616       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1213 19:45:01.319734       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1213 19:45:01.325584       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1213 19:45:01.326830       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 19:45:01.326853       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1213 19:45:01.326861       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1213 19:45:01.333504       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1213 19:45:01.334668       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 19:45:01.345328       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1213 19:45:01.349884       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1213 19:45:01.351485       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1213 19:45:01.351534       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1213 19:45:01.353103       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1213 19:45:01.353340       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1213 19:45:01.353843       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1213 19:45:01.357065       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1213 19:45:01.359191       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 19:45:01.361107       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1213 19:45:01.364212       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1213 19:45:01.365430       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1213 19:45:01.367685       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1213 19:45:01.371028       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1213 19:45:01.374551       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1213 19:45:01.385927       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [1a40f83317314c07f555fa39401a8be922e3f11b98c3806ff21541a00bbf5124] <==
	I1213 19:44:56.425135       1 server_linux.go:53] "Using iptables proxy"
	I1213 19:44:57.389129       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1213 19:44:58.189827       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 19:44:58.189895       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1213 19:44:58.190004       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 19:44:58.229686       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 19:44:58.229814       1 server_linux.go:132] "Using iptables Proxier"
	I1213 19:44:58.242182       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 19:44:58.242479       1 server.go:527] "Version info" version="v1.34.2"
	I1213 19:44:58.242551       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 19:44:58.256947       1 config.go:106] "Starting endpoint slice config controller"
	I1213 19:44:58.256974       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 19:44:58.257334       1 config.go:200] "Starting service config controller"
	I1213 19:44:58.257355       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 19:44:58.257710       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 19:44:58.257726       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 19:44:58.258158       1 config.go:309] "Starting node config controller"
	I1213 19:44:58.258178       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 19:44:58.258185       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 19:44:58.357764       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 19:44:58.357832       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 19:44:58.357845       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [1ed1dff88cccfc264be42d8a89f25edba5cdd04758cb56c2f4f47d5db62de61c] <==
	I1213 19:44:00.530378       1 server_linux.go:53] "Using iptables proxy"
	I1213 19:44:00.634918       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1213 19:44:00.737066       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 19:44:00.737107       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1213 19:44:00.737192       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 19:44:00.755851       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 19:44:00.755913       1 server_linux.go:132] "Using iptables Proxier"
	I1213 19:44:00.759505       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 19:44:00.759847       1 server.go:527] "Version info" version="v1.34.2"
	I1213 19:44:00.759921       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 19:44:00.762786       1 config.go:106] "Starting endpoint slice config controller"
	I1213 19:44:00.762863       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 19:44:00.763183       1 config.go:200] "Starting service config controller"
	I1213 19:44:00.763226       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 19:44:00.763579       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 19:44:00.815125       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 19:44:00.815154       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 19:44:00.763973       1 config.go:309] "Starting node config controller"
	I1213 19:44:00.815182       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 19:44:00.815187       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 19:44:00.863425       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 19:44:00.863457       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [76981bd3c6c8f820b72bd027ca5829b5098f61b47ed859bd1e2fd64fa786a137] <==
	E1213 19:43:50.415628       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1213 19:43:50.415802       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1213 19:43:50.417468       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1213 19:43:50.417607       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1213 19:43:50.417675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1213 19:43:50.417729       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1213 19:43:50.417768       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1213 19:43:50.417812       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1213 19:43:50.417850       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1213 19:43:50.417936       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1213 19:43:50.420406       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 19:43:51.242877       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1213 19:43:51.268319       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1213 19:43:51.317637       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1213 19:43:51.336860       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1213 19:43:51.370508       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1213 19:43:51.452930       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1213 19:43:51.459317       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1213 19:43:53.519484       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 19:44:44.568365       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1213 19:44:44.568388       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1213 19:44:44.568410       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1213 19:44:44.568432       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 19:44:44.568643       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1213 19:44:44.568658       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [8f5ba1ee2810a03a1e4142f99dbd279938b0c93175c0f6e1e7cea4d27503ead4] <==
	I1213 19:44:54.974367       1 serving.go:386] Generated self-signed cert in-memory
	W1213 19:44:57.893929       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1213 19:44:57.894034       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1213 19:44:57.894069       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1213 19:44:57.894122       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1213 19:44:58.048753       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1213 19:44:58.048793       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 19:44:58.056298       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 19:44:58.057465       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 19:44:58.057123       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1213 19:44:58.057150       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1213 19:44:58.159385       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 13 19:44:53 pause-327125 kubelet[1319]: E1213 19:44:53.320928    1319 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-327125\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="71f33a69a01b767ca6767be4048d30ea" pod="kube-system/etcd-pause-327125"
	Dec 13 19:44:53 pause-327125 kubelet[1319]: E1213 19:44:53.321153    1319 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-327125\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="cd656f340533802e82e2ce167ea59578" pod="kube-system/kube-apiserver-pause-327125"
	Dec 13 19:44:53 pause-327125 kubelet[1319]: E1213 19:44:53.321353    1319 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-327125\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="d3e30aaf91ff4055a3b9e68a0817287a" pod="kube-system/kube-controller-manager-pause-327125"
	Dec 13 19:44:53 pause-327125 kubelet[1319]: E1213 19:44:53.321601    1319 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-rrvxm\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="404728d3-6e60-4ffd-8fde-d04cd97b1d71" pod="kube-system/kindnet-rrvxm"
	Dec 13 19:44:53 pause-327125 kubelet[1319]: I1213 19:44:53.323346    1319 scope.go:117] "RemoveContainer" containerID="1ed1dff88cccfc264be42d8a89f25edba5cdd04758cb56c2f4f47d5db62de61c"
	Dec 13 19:44:53 pause-327125 kubelet[1319]: E1213 19:44:53.323788    1319 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-327125\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="cd656f340533802e82e2ce167ea59578" pod="kube-system/kube-apiserver-pause-327125"
	Dec 13 19:44:53 pause-327125 kubelet[1319]: E1213 19:44:53.323962    1319 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-327125\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="d3e30aaf91ff4055a3b9e68a0817287a" pod="kube-system/kube-controller-manager-pause-327125"
	Dec 13 19:44:53 pause-327125 kubelet[1319]: E1213 19:44:53.324127    1319 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wm755\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="448b8703-4d1c-436d-8066-34855c077030" pod="kube-system/kube-proxy-wm755"
	Dec 13 19:44:53 pause-327125 kubelet[1319]: E1213 19:44:53.325376    1319 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-rrvxm\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="404728d3-6e60-4ffd-8fde-d04cd97b1d71" pod="kube-system/kindnet-rrvxm"
	Dec 13 19:44:53 pause-327125 kubelet[1319]: E1213 19:44:53.325648    1319 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-327125\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="ac4c33e3f03b6273c35146d2e13008e5" pod="kube-system/kube-scheduler-pause-327125"
	Dec 13 19:44:53 pause-327125 kubelet[1319]: E1213 19:44:53.327229    1319 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-327125\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="71f33a69a01b767ca6767be4048d30ea" pod="kube-system/etcd-pause-327125"
	Dec 13 19:44:53 pause-327125 kubelet[1319]: I1213 19:44:53.330834    1319 scope.go:117] "RemoveContainer" containerID="7220c98a72257ad1aafe49b3bb8b08900afa0ea714b2d4d6646ef31da20fa812"
	Dec 13 19:44:53 pause-327125 kubelet[1319]: E1213 19:44:53.331830    1319 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-327125\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="d3e30aaf91ff4055a3b9e68a0817287a" pod="kube-system/kube-controller-manager-pause-327125"
	Dec 13 19:44:53 pause-327125 kubelet[1319]: E1213 19:44:53.332271    1319 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wm755\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="448b8703-4d1c-436d-8066-34855c077030" pod="kube-system/kube-proxy-wm755"
	Dec 13 19:44:53 pause-327125 kubelet[1319]: E1213 19:44:53.332621    1319 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-rrvxm\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="404728d3-6e60-4ffd-8fde-d04cd97b1d71" pod="kube-system/kindnet-rrvxm"
	Dec 13 19:44:53 pause-327125 kubelet[1319]: E1213 19:44:53.332880    1319 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-n9958\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="22e2b1b5-7a27-4ca8-89e0-cce8c2000a1d" pod="kube-system/coredns-66bc5c9577-n9958"
	Dec 13 19:44:53 pause-327125 kubelet[1319]: E1213 19:44:53.333391    1319 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-327125\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="ac4c33e3f03b6273c35146d2e13008e5" pod="kube-system/kube-scheduler-pause-327125"
	Dec 13 19:44:53 pause-327125 kubelet[1319]: E1213 19:44:53.333622    1319 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-327125\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="71f33a69a01b767ca6767be4048d30ea" pod="kube-system/etcd-pause-327125"
	Dec 13 19:44:53 pause-327125 kubelet[1319]: E1213 19:44:53.333836    1319 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-327125\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="cd656f340533802e82e2ce167ea59578" pod="kube-system/kube-apiserver-pause-327125"
	Dec 13 19:44:57 pause-327125 kubelet[1319]: E1213 19:44:57.971909    1319 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-327125\" is forbidden: User \"system:node:pause-327125\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-327125' and this object" podUID="ac4c33e3f03b6273c35146d2e13008e5" pod="kube-system/kube-scheduler-pause-327125"
	Dec 13 19:44:57 pause-327125 kubelet[1319]: E1213 19:44:57.972820    1319 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-327125\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-327125' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Dec 13 19:45:03 pause-327125 kubelet[1319]: W1213 19:45:03.248589    1319 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Dec 13 19:45:13 pause-327125 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 13 19:45:13 pause-327125 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 13 19:45:13 pause-327125 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-327125 -n pause-327125
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-327125 -n pause-327125: exit status 2 (377.413527ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-327125 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-327125
helpers_test.go:244: (dbg) docker inspect pause-327125:

-- stdout --
	[
	    {
	        "Id": "df299981377d94cc033c7b39c26e6775862cd9897688cd80c5f78b936632f181",
	        "Created": "2025-12-13T19:43:26.814661435Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 220175,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T19:43:26.893266577Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/df299981377d94cc033c7b39c26e6775862cd9897688cd80c5f78b936632f181/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/df299981377d94cc033c7b39c26e6775862cd9897688cd80c5f78b936632f181/hostname",
	        "HostsPath": "/var/lib/docker/containers/df299981377d94cc033c7b39c26e6775862cd9897688cd80c5f78b936632f181/hosts",
	        "LogPath": "/var/lib/docker/containers/df299981377d94cc033c7b39c26e6775862cd9897688cd80c5f78b936632f181/df299981377d94cc033c7b39c26e6775862cd9897688cd80c5f78b936632f181-json.log",
	        "Name": "/pause-327125",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-327125:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-327125",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "df299981377d94cc033c7b39c26e6775862cd9897688cd80c5f78b936632f181",
	                "LowerDir": "/var/lib/docker/overlay2/926ec214a9aead4df2c0cd0cdb4af9c4a51e20d3f781947ec935a936412113c3-init/diff:/var/lib/docker/overlay2/4cda671c3c20fb572bbb254b6cb2d66de67b46788c2aa883ec19024f1ff16f23/diff",
	                "MergedDir": "/var/lib/docker/overlay2/926ec214a9aead4df2c0cd0cdb4af9c4a51e20d3f781947ec935a936412113c3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/926ec214a9aead4df2c0cd0cdb4af9c4a51e20d3f781947ec935a936412113c3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/926ec214a9aead4df2c0cd0cdb4af9c4a51e20d3f781947ec935a936412113c3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-327125",
	                "Source": "/var/lib/docker/volumes/pause-327125/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-327125",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-327125",
	                "name.minikube.sigs.k8s.io": "pause-327125",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8b7bc60ac907d9319abcb07fc89111bc1bbaa28370d282bf032c477efe14ec24",
	            "SandboxKey": "/var/run/docker/netns/8b7bc60ac907",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33023"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33024"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33027"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33025"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33026"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-327125": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:0c:07:f3:01:18",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c7be3b78199daca1c75d376ac38565212667e25c95a34efc8475f8ae1f2894dc",
	                    "EndpointID": "0f4a96fede720f51c2687113de577845dcc4a5c2ed7aceca185bf76727dd2e88",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-327125",
	                        "df299981377d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-327125 -n pause-327125
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-327125 -n pause-327125: exit status 2 (369.338106ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p pause-327125 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p pause-327125 logs -n 25: (1.386126611s)
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                       │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-255151 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                           │ NoKubernetes-255151       │ jenkins │ v1.37.0 │ 13 Dec 25 19:35 UTC │ 13 Dec 25 19:35 UTC │
	│ start   │ -p missing-upgrade-208144 --memory=3072 --driver=docker  --container-runtime=crio                                                               │ missing-upgrade-208144    │ jenkins │ v1.35.0 │ 13 Dec 25 19:35 UTC │ 13 Dec 25 19:36 UTC │
	│ start   │ -p NoKubernetes-255151 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-255151       │ jenkins │ v1.37.0 │ 13 Dec 25 19:36 UTC │ 13 Dec 25 19:36 UTC │
	│ delete  │ -p NoKubernetes-255151                                                                                                                          │ NoKubernetes-255151       │ jenkins │ v1.37.0 │ 13 Dec 25 19:36 UTC │ 13 Dec 25 19:36 UTC │
	│ start   │ -p NoKubernetes-255151 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-255151       │ jenkins │ v1.37.0 │ 13 Dec 25 19:36 UTC │ 13 Dec 25 19:36 UTC │
	│ ssh     │ -p NoKubernetes-255151 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-255151       │ jenkins │ v1.37.0 │ 13 Dec 25 19:36 UTC │                     │
	│ start   │ -p missing-upgrade-208144 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ missing-upgrade-208144    │ jenkins │ v1.37.0 │ 13 Dec 25 19:36 UTC │ 13 Dec 25 19:37 UTC │
	│ stop    │ -p NoKubernetes-255151                                                                                                                          │ NoKubernetes-255151       │ jenkins │ v1.37.0 │ 13 Dec 25 19:36 UTC │ 13 Dec 25 19:36 UTC │
	│ start   │ -p NoKubernetes-255151 --driver=docker  --container-runtime=crio                                                                                │ NoKubernetes-255151       │ jenkins │ v1.37.0 │ 13 Dec 25 19:36 UTC │ 13 Dec 25 19:36 UTC │
	│ ssh     │ -p NoKubernetes-255151 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-255151       │ jenkins │ v1.37.0 │ 13 Dec 25 19:36 UTC │                     │
	│ delete  │ -p NoKubernetes-255151                                                                                                                          │ NoKubernetes-255151       │ jenkins │ v1.37.0 │ 13 Dec 25 19:36 UTC │ 13 Dec 25 19:36 UTC │
	│ start   │ -p kubernetes-upgrade-203932 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio        │ kubernetes-upgrade-203932 │ jenkins │ v1.37.0 │ 13 Dec 25 19:36 UTC │ 13 Dec 25 19:37 UTC │
	│ delete  │ -p missing-upgrade-208144                                                                                                                       │ missing-upgrade-208144    │ jenkins │ v1.37.0 │ 13 Dec 25 19:37 UTC │ 13 Dec 25 19:37 UTC │
	│ stop    │ -p kubernetes-upgrade-203932                                                                                                                    │ kubernetes-upgrade-203932 │ jenkins │ v1.37.0 │ 13 Dec 25 19:37 UTC │ 13 Dec 25 19:37 UTC │
	│ start   │ -p kubernetes-upgrade-203932 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-203932 │ jenkins │ v1.37.0 │ 13 Dec 25 19:37 UTC │                     │
	│ start   │ -p stopped-upgrade-825838 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ stopped-upgrade-825838    │ jenkins │ v1.35.0 │ 13 Dec 25 19:37 UTC │ 13 Dec 25 19:37 UTC │
	│ stop    │ stopped-upgrade-825838 stop                                                                                                                     │ stopped-upgrade-825838    │ jenkins │ v1.35.0 │ 13 Dec 25 19:37 UTC │ 13 Dec 25 19:37 UTC │
	│ start   │ -p stopped-upgrade-825838 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ stopped-upgrade-825838    │ jenkins │ v1.37.0 │ 13 Dec 25 19:37 UTC │ 13 Dec 25 19:42 UTC │
	│ delete  │ -p stopped-upgrade-825838                                                                                                                       │ stopped-upgrade-825838    │ jenkins │ v1.37.0 │ 13 Dec 25 19:42 UTC │ 13 Dec 25 19:42 UTC │
	│ start   │ -p running-upgrade-947759 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ running-upgrade-947759    │ jenkins │ v1.35.0 │ 13 Dec 25 19:42 UTC │ 13 Dec 25 19:42 UTC │
	│ start   │ -p running-upgrade-947759 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ running-upgrade-947759    │ jenkins │ v1.37.0 │ 13 Dec 25 19:42 UTC │ 13 Dec 25 19:43 UTC │
	│ delete  │ -p running-upgrade-947759                                                                                                                       │ running-upgrade-947759    │ jenkins │ v1.37.0 │ 13 Dec 25 19:43 UTC │ 13 Dec 25 19:43 UTC │
	│ start   │ -p pause-327125 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                       │ pause-327125              │ jenkins │ v1.37.0 │ 13 Dec 25 19:43 UTC │ 13 Dec 25 19:44 UTC │
	│ start   │ -p pause-327125 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                │ pause-327125              │ jenkins │ v1.37.0 │ 13 Dec 25 19:44 UTC │ 13 Dec 25 19:45 UTC │
	│ pause   │ -p pause-327125 --alsologtostderr -v=5                                                                                                          │ pause-327125              │ jenkins │ v1.37.0 │ 13 Dec 25 19:45 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 19:44:43
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 19:44:43.254898  222758 out.go:360] Setting OutFile to fd 1 ...
	I1213 19:44:43.255014  222758 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 19:44:43.255023  222758 out.go:374] Setting ErrFile to fd 2...
	I1213 19:44:43.255027  222758 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 19:44:43.255307  222758 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-2686/.minikube/bin
	I1213 19:44:43.255654  222758 out.go:368] Setting JSON to false
	I1213 19:44:43.256577  222758 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":8836,"bootTime":1765646248,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 19:44:43.256645  222758 start.go:143] virtualization:  
	I1213 19:44:43.259679  222758 out.go:179] * [pause-327125] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 19:44:43.263586  222758 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 19:44:43.263697  222758 notify.go:221] Checking for updates...
	I1213 19:44:43.271243  222758 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 19:44:43.274276  222758 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-2686/kubeconfig
	I1213 19:44:43.277105  222758 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-2686/.minikube
	I1213 19:44:43.280003  222758 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 19:44:43.283045  222758 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 19:44:43.286313  222758 config.go:182] Loaded profile config "pause-327125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 19:44:43.286875  222758 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 19:44:43.315382  222758 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 19:44:43.315508  222758 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 19:44:43.372337  222758 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-13 19:44:43.362785272 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 19:44:43.372455  222758 docker.go:319] overlay module found
	I1213 19:44:43.375630  222758 out.go:179] * Using the docker driver based on existing profile
	I1213 19:44:43.378523  222758 start.go:309] selected driver: docker
	I1213 19:44:43.378543  222758 start.go:927] validating driver "docker" against &{Name:pause-327125 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-327125 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 19:44:43.378675  222758 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 19:44:43.378796  222758 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 19:44:43.433118  222758 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-13 19:44:43.423005553 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 19:44:43.433660  222758 cni.go:84] Creating CNI manager for ""
	I1213 19:44:43.433730  222758 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 19:44:43.433784  222758 start.go:353] cluster config:
	{Name:pause-327125 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-327125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false
storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 19:44:43.438798  222758 out.go:179] * Starting "pause-327125" primary control-plane node in "pause-327125" cluster
	I1213 19:44:43.441680  222758 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 19:44:43.444711  222758 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 19:44:43.447581  222758 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 19:44:43.447629  222758 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-2686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1213 19:44:43.447639  222758 cache.go:65] Caching tarball of preloaded images
	I1213 19:44:43.447666  222758 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 19:44:43.447724  222758 preload.go:238] Found /home/jenkins/minikube-integration/22122-2686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 19:44:43.447734  222758 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1213 19:44:43.447879  222758 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/pause-327125/config.json ...
	I1213 19:44:43.466955  222758 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 19:44:43.466977  222758 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 19:44:43.466997  222758 cache.go:243] Successfully downloaded all kic artifacts
	I1213 19:44:43.467025  222758 start.go:360] acquireMachinesLock for pause-327125: {Name:mka7d8a1169e3d541c2f31839cc969c6ea065386 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 19:44:43.467093  222758 start.go:364] duration metric: took 41.28µs to acquireMachinesLock for "pause-327125"
	I1213 19:44:43.467117  222758 start.go:96] Skipping create...Using existing machine configuration
	I1213 19:44:43.467129  222758 fix.go:54] fixHost starting: 
	I1213 19:44:43.467392  222758 cli_runner.go:164] Run: docker container inspect pause-327125 --format={{.State.Status}}
	I1213 19:44:43.483933  222758 fix.go:112] recreateIfNeeded on pause-327125: state=Running err=<nil>
	W1213 19:44:43.483969  222758 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 19:44:43.487108  222758 out.go:252] * Updating the running docker "pause-327125" container ...
	I1213 19:44:43.487149  222758 machine.go:94] provisionDockerMachine start ...
	I1213 19:44:43.487229  222758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-327125
	I1213 19:44:43.506354  222758 main.go:143] libmachine: Using SSH client type: native
	I1213 19:44:43.506679  222758 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33023 <nil> <nil>}
	I1213 19:44:43.506695  222758 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 19:44:43.656851  222758 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-327125
	
	I1213 19:44:43.656879  222758 ubuntu.go:182] provisioning hostname "pause-327125"
	I1213 19:44:43.656941  222758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-327125
	I1213 19:44:43.675831  222758 main.go:143] libmachine: Using SSH client type: native
	I1213 19:44:43.676144  222758 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33023 <nil> <nil>}
	I1213 19:44:43.676155  222758 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-327125 && echo "pause-327125" | sudo tee /etc/hostname
	I1213 19:44:43.845971  222758 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-327125
	
	I1213 19:44:43.846069  222758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-327125
	I1213 19:44:43.871939  222758 main.go:143] libmachine: Using SSH client type: native
	I1213 19:44:43.872256  222758 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33023 <nil> <nil>}
	I1213 19:44:43.872278  222758 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-327125' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-327125/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-327125' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 19:44:44.041437  222758 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 19:44:44.041470  222758 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-2686/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-2686/.minikube}
	I1213 19:44:44.041503  222758 ubuntu.go:190] setting up certificates
	I1213 19:44:44.041512  222758 provision.go:84] configureAuth start
	I1213 19:44:44.041587  222758 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-327125
	I1213 19:44:44.060050  222758 provision.go:143] copyHostCerts
	I1213 19:44:44.060122  222758 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem, removing ...
	I1213 19:44:44.060131  222758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem
	I1213 19:44:44.060208  222758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/ca.pem (1082 bytes)
	I1213 19:44:44.060312  222758 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem, removing ...
	I1213 19:44:44.060317  222758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem
	I1213 19:44:44.060345  222758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/cert.pem (1123 bytes)
	I1213 19:44:44.060410  222758 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem, removing ...
	I1213 19:44:44.060414  222758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem
	I1213 19:44:44.060438  222758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-2686/.minikube/key.pem (1675 bytes)
	I1213 19:44:44.060491  222758 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem org=jenkins.pause-327125 san=[127.0.0.1 192.168.85.2 localhost minikube pause-327125]
	I1213 19:44:44.195665  222758 provision.go:177] copyRemoteCerts
	I1213 19:44:44.195742  222758 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 19:44:44.195778  222758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-327125
	I1213 19:44:44.213677  222758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33023 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/pause-327125/id_rsa Username:docker}
	I1213 19:44:44.321575  222758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1213 19:44:44.339711  222758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 19:44:44.360780  222758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 19:44:44.379613  222758 provision.go:87] duration metric: took 338.079115ms to configureAuth
	I1213 19:44:44.379640  222758 ubuntu.go:206] setting minikube options for container-runtime
	I1213 19:44:44.379861  222758 config.go:182] Loaded profile config "pause-327125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 19:44:44.379953  222758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-327125
	I1213 19:44:44.398782  222758 main.go:143] libmachine: Using SSH client type: native
	I1213 19:44:44.399106  222758 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33023 <nil> <nil>}
	I1213 19:44:44.399120  222758 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 19:44:49.770115  222758 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 19:44:49.770148  222758 machine.go:97] duration metric: took 6.282990346s to provisionDockerMachine
	I1213 19:44:49.770160  222758 start.go:293] postStartSetup for "pause-327125" (driver="docker")
	I1213 19:44:49.770171  222758 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 19:44:49.770244  222758 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 19:44:49.770291  222758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-327125
	I1213 19:44:49.790601  222758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33023 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/pause-327125/id_rsa Username:docker}
	I1213 19:44:49.897250  222758 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 19:44:49.900737  222758 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 19:44:49.900773  222758 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 19:44:49.900785  222758 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-2686/.minikube/addons for local assets ...
	I1213 19:44:49.900837  222758 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-2686/.minikube/files for local assets ...
	I1213 19:44:49.900920  222758 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem -> 46372.pem in /etc/ssl/certs
	I1213 19:44:49.901148  222758 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 19:44:49.909135  222758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem --> /etc/ssl/certs/46372.pem (1708 bytes)
	I1213 19:44:49.927212  222758 start.go:296] duration metric: took 157.036913ms for postStartSetup
	I1213 19:44:49.927291  222758 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 19:44:49.927349  222758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-327125
	I1213 19:44:49.944931  222758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33023 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/pause-327125/id_rsa Username:docker}
	I1213 19:44:50.058747  222758 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 19:44:50.064244  222758 fix.go:56] duration metric: took 6.597108627s for fixHost
	I1213 19:44:50.064285  222758 start.go:83] releasing machines lock for "pause-327125", held for 6.597166671s
	I1213 19:44:50.064356  222758 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-327125
	I1213 19:44:50.081775  222758 ssh_runner.go:195] Run: cat /version.json
	I1213 19:44:50.081845  222758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-327125
	I1213 19:44:50.081848  222758 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 19:44:50.081920  222758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-327125
	I1213 19:44:50.103964  222758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33023 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/pause-327125/id_rsa Username:docker}
	I1213 19:44:50.103964  222758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33023 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/pause-327125/id_rsa Username:docker}
	I1213 19:44:50.209564  222758 ssh_runner.go:195] Run: systemctl --version
	I1213 19:44:50.297918  222758 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 19:44:50.338112  222758 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 19:44:50.342517  222758 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 19:44:50.342591  222758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 19:44:50.351208  222758 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 19:44:50.351235  222758 start.go:496] detecting cgroup driver to use...
	I1213 19:44:50.351265  222758 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 19:44:50.351319  222758 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 19:44:50.366166  222758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 19:44:50.379395  222758 docker.go:218] disabling cri-docker service (if available) ...
	I1213 19:44:50.379462  222758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 19:44:50.394972  222758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 19:44:50.407809  222758 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 19:44:50.544599  222758 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 19:44:50.705991  222758 docker.go:234] disabling docker service ...
	I1213 19:44:50.706058  222758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 19:44:50.721001  222758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 19:44:50.734274  222758 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 19:44:50.866646  222758 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 19:44:51.007402  222758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 19:44:51.022578  222758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 19:44:51.036927  222758 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 19:44:51.037084  222758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:44:51.047017  222758 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 19:44:51.047088  222758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:44:51.057189  222758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:44:51.066535  222758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:44:51.076017  222758 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 19:44:51.084520  222758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:44:51.094261  222758 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:44:51.103462  222758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:44:51.112885  222758 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 19:44:51.120922  222758 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 19:44:51.128996  222758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 19:44:51.256437  222758 ssh_runner.go:195] Run: sudo systemctl restart crio
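
Annotation (not part of the log): the sed invocations above rewrite pause_image and cgroup_manager in /etc/crio/crio.conf.d/02-crio.conf before the daemon-reload and CRI-O restart. A hedged Go equivalent of those two substitutions, for illustration only (the logged code shells out to sed, it does not do this):

    package main

    import (
    	"os"
    	"regexp"
    )

    func main() {
    	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
    	data, err := os.ReadFile(conf)
    	if err != nil {
    		panic(err)
    	}
    	// Mirrors: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
    	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
    	// Mirrors: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
    	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
    	if err := os.WriteFile(conf, data, 0o644); err != nil {
    		panic(err)
    	}
    }
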
	I1213 19:44:51.481764  222758 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 19:44:51.481868  222758 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 19:44:51.485841  222758 start.go:564] Will wait 60s for crictl version
	I1213 19:44:51.485905  222758 ssh_runner.go:195] Run: which crictl
	I1213 19:44:51.489413  222758 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 19:44:51.513454  222758 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 19:44:51.513565  222758 ssh_runner.go:195] Run: crio --version
	I1213 19:44:51.541229  222758 ssh_runner.go:195] Run: crio --version
	I1213 19:44:51.571059  222758 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1213 19:44:51.574084  222758 cli_runner.go:164] Run: docker network inspect pause-327125 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 19:44:51.590544  222758 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1213 19:44:51.594446  222758 kubeadm.go:884] updating cluster {Name:pause-327125 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-327125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 19:44:51.594590  222758 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 19:44:51.594654  222758 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 19:44:51.626769  222758 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 19:44:51.626796  222758 crio.go:433] Images already preloaded, skipping extraction
	I1213 19:44:51.626851  222758 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 19:44:51.651922  222758 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 19:44:51.651947  222758 cache_images.go:86] Images are preloaded, skipping loading
	I1213 19:44:51.651955  222758 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1213 19:44:51.652062  222758 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-327125 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-327125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 19:44:51.652137  222758 ssh_runner.go:195] Run: crio config
	I1213 19:44:51.721171  222758 cni.go:84] Creating CNI manager for ""
	I1213 19:44:51.721242  222758 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 19:44:51.721273  222758 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 19:44:51.721325  222758 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-327125 NodeName:pause-327125 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 19:44:51.721512  222758 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-327125"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 19:44:51.721629  222758 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 19:44:51.729218  222758 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 19:44:51.729294  222758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 19:44:51.736627  222758 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1213 19:44:51.749223  222758 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 19:44:51.761927  222758 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
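
Annotation (not part of the log): the generated kubeadm.yaml shown above is one file containing four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by "---". A small stdlib-only sketch, assuming the file has landed at the path from the scp line above, that lists the kind of each document:

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    	"strings"
    )

    func main() {
    	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
    	if err != nil {
    		panic(err)
    	}
    	kind := regexp.MustCompile(`(?m)^kind:\s*(\S+)`)
    	for i, doc := range strings.Split(string(data), "\n---\n") {
    		if m := kind.FindStringSubmatch(doc); m != nil {
    			fmt.Printf("document %d: %s\n", i, m[1])
    		}
    	}
    }

For the config dumped above this would report InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in order.
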
	I1213 19:44:51.774534  222758 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1213 19:44:51.778297  222758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 19:44:51.911710  222758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 19:44:51.925597  222758 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/pause-327125 for IP: 192.168.85.2
	I1213 19:44:51.925616  222758 certs.go:195] generating shared ca certs ...
	I1213 19:44:51.925631  222758 certs.go:227] acquiring lock for ca certs: {Name:mkf9b87b9a1a82bdfcae8a54365751154f6d858f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:44:51.925752  222758 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key
	I1213 19:44:51.925794  222758 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key
	I1213 19:44:51.925801  222758 certs.go:257] generating profile certs ...
	I1213 19:44:51.925885  222758 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/pause-327125/client.key
	I1213 19:44:51.925957  222758 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/pause-327125/apiserver.key.0d7ed32c
	I1213 19:44:51.925997  222758 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/pause-327125/proxy-client.key
	I1213 19:44:51.926104  222758 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637.pem (1338 bytes)
	W1213 19:44:51.926156  222758 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637_empty.pem, impossibly tiny 0 bytes
	I1213 19:44:51.926165  222758 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 19:44:51.926191  222758 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/ca.pem (1082 bytes)
	I1213 19:44:51.926216  222758 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/cert.pem (1123 bytes)
	I1213 19:44:51.926238  222758 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/certs/key.pem (1675 bytes)
	I1213 19:44:51.926282  222758 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem (1708 bytes)
	I1213 19:44:51.926874  222758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 19:44:51.945113  222758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1213 19:44:51.963664  222758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 19:44:51.986349  222758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 19:44:52.006266  222758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/pause-327125/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1213 19:44:52.024906  222758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/pause-327125/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 19:44:52.043591  222758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/pause-327125/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 19:44:52.062159  222758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/pause-327125/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 19:44:52.085594  222758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/certs/4637.pem --> /usr/share/ca-certificates/4637.pem (1338 bytes)
	I1213 19:44:52.106848  222758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/ssl/certs/46372.pem --> /usr/share/ca-certificates/46372.pem (1708 bytes)
	I1213 19:44:52.127642  222758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 19:44:52.149968  222758 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 19:44:52.162941  222758 ssh_runner.go:195] Run: openssl version
	I1213 19:44:52.169638  222758 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/46372.pem
	I1213 19:44:52.176903  222758 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/46372.pem /etc/ssl/certs/46372.pem
	I1213 19:44:52.184542  222758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/46372.pem
	I1213 19:44:52.188335  222758 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 18:27 /usr/share/ca-certificates/46372.pem
	I1213 19:44:52.188403  222758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/46372.pem
	I1213 19:44:52.229460  222758 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 19:44:52.236848  222758 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:44:52.243954  222758 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 19:44:52.251477  222758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:44:52.255401  222758 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:44:52.255490  222758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:44:52.296614  222758 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 19:44:52.304269  222758 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4637.pem
	I1213 19:44:52.311847  222758 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4637.pem /etc/ssl/certs/4637.pem
	I1213 19:44:52.319655  222758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4637.pem
	I1213 19:44:52.323605  222758 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 18:27 /usr/share/ca-certificates/4637.pem
	I1213 19:44:52.323672  222758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4637.pem
	I1213 19:44:52.369297  222758 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
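
Annotation (not part of the log): each of the three certificate blocks above follows the same pattern: place a PEM under /usr/share/ca-certificates, symlink it into /etc/ssl/certs under its OpenSSL subject-name hash (here 3ec20f2e.0, b5213941.0 and 51391683.0), then verify the link exists. An illustrative sketch of that pattern for one certificate, shelling out to openssl for the hash exactly as the log does (this is not minikube's own code):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	const pem = "/usr/share/ca-certificates/minikubeCA.pem"
    	// openssl x509 -hash -noout -in <pem> prints the subject-name hash, e.g. "b5213941".
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		panic(err)
    	}
    	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
    	_ = os.Remove(link) // emulate ln -fs: replace an existing link if present
    	if err := os.Symlink(pem, link); err != nil {
    		panic(err)
    	}
    	fmt.Println("linked", pem, "->", link)
    }
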
	I1213 19:44:52.376621  222758 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 19:44:52.380334  222758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 19:44:52.420950  222758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 19:44:52.461622  222758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 19:44:52.502658  222758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 19:44:52.543528  222758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 19:44:52.584342  222758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
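
Annotation (not part of the log): the six "openssl x509 ... -checkend 86400" runs above ask whether each control-plane certificate expires within the next 24 hours (86400 seconds). A native-Go sketch of the same check for a single file, shown only for illustration since the log clearly shells out to openssl:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(raw)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// Equivalent of -checkend 86400: does the certificate outlive the next 24h?
    	if time.Until(cert.NotAfter) < 24*time.Hour {
    		fmt.Println("certificate expires within 24h:", cert.NotAfter)
    	} else {
    		fmt.Println("certificate valid beyond 24h:", cert.NotAfter)
    	}
    }
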
	I1213 19:44:52.625226  222758 kubeadm.go:401] StartCluster: {Name:pause-327125 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-327125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 19:44:52.625341  222758 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 19:44:52.625406  222758 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 19:44:52.653541  222758 cri.go:89] found id: "7220c98a72257ad1aafe49b3bb8b08900afa0ea714b2d4d6646ef31da20fa812"
	I1213 19:44:52.653563  222758 cri.go:89] found id: "1ed1dff88cccfc264be42d8a89f25edba5cdd04758cb56c2f4f47d5db62de61c"
	I1213 19:44:52.653568  222758 cri.go:89] found id: "75d5802e02f12610e502e79be2dfa4c49a2d962ac8ba1a7e6706a97f9dcc1ae1"
	I1213 19:44:52.653572  222758 cri.go:89] found id: "aaf565413e1949b50c1ec1ad4e41419d439117e48ca481c22c331764e7731b89"
	I1213 19:44:52.653576  222758 cri.go:89] found id: "76981bd3c6c8f820b72bd027ca5829b5098f61b47ed859bd1e2fd64fa786a137"
	I1213 19:44:52.653579  222758 cri.go:89] found id: "68c32cc7d3c1f50302bec49c92368e3854e8146147b27d256d8c15e40407d1b2"
	I1213 19:44:52.653582  222758 cri.go:89] found id: "8bdee30f7b308a0339fbe56206ca3d6a98e2801a472b45dc376f396eb6767b8b"
	I1213 19:44:52.653586  222758 cri.go:89] found id: ""
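
Annotation (not part of the log): the container IDs listed above come from the crictl call logged at 19:44:52.625406; "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system" prints one full container ID per line. A minimal sketch that runs the same command and collects the IDs, assuming crictl is on PATH and uses the runtime endpoint written to /etc/crictl.yaml earlier:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
    		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
    	if err != nil {
    		panic(err)
    	}
    	var ids []string
    	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
    		if line != "" {
    			ids = append(ids, line)
    		}
    	}
    	fmt.Printf("found %d kube-system containers\n", len(ids))
    }
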
	I1213 19:44:52.653661  222758 ssh_runner.go:195] Run: sudo runc list -f json
	W1213 19:44:52.664577  222758 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T19:44:52Z" level=error msg="open /run/runc: no such file or directory"
	I1213 19:44:52.664675  222758 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 19:44:52.672683  222758 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 19:44:52.672754  222758 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 19:44:52.672843  222758 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 19:44:52.680249  222758 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 19:44:52.680939  222758 kubeconfig.go:125] found "pause-327125" server: "https://192.168.85.2:8443"
	I1213 19:44:52.681722  222758 kapi.go:59] client config for pause-327125: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/profiles/pause-327125/client.crt", KeyFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/profiles/pause-327125/client.key", CAFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(
nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 19:44:52.682227  222758 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1213 19:44:52.682251  222758 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1213 19:44:52.682258  222758 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1213 19:44:52.682268  222758 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1213 19:44:52.682272  222758 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1213 19:44:52.682530  222758 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 19:44:52.689970  222758 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1213 19:44:52.690004  222758 kubeadm.go:602] duration metric: took 17.230445ms to restartPrimaryControlPlane
	I1213 19:44:52.690015  222758 kubeadm.go:403] duration metric: took 64.796977ms to StartCluster
	I1213 19:44:52.690059  222758 settings.go:142] acquiring lock: {Name:mkabef07beee93a0619ef6b8f854900ab9ed0899 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:44:52.690157  222758 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22122-2686/kubeconfig
	I1213 19:44:52.691011  222758 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/kubeconfig: {Name:mkd364151ba0e08b56cf2ae826abbcc274faaabf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:44:52.691257  222758 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 19:44:52.691533  222758 config.go:182] Loaded profile config "pause-327125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 19:44:52.691602  222758 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 19:44:52.695701  222758 out.go:179] * Verifying Kubernetes components...
	I1213 19:44:52.695706  222758 out.go:179] * Enabled addons: 
	I1213 19:44:52.698645  222758 addons.go:530] duration metric: took 7.039322ms for enable addons: enabled=[]
	I1213 19:44:52.698752  222758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 19:44:52.828837  222758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 19:44:52.843802  222758 node_ready.go:35] waiting up to 6m0s for node "pause-327125" to be "Ready" ...
	I1213 19:44:57.987257  222758 node_ready.go:49] node "pause-327125" is "Ready"
	I1213 19:44:57.987284  222758 node_ready.go:38] duration metric: took 5.143445481s for node "pause-327125" to be "Ready" ...
	I1213 19:44:57.987297  222758 api_server.go:52] waiting for apiserver process to appear ...
	I1213 19:44:57.987358  222758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:44:58.001627  222758 api_server.go:72] duration metric: took 5.310333758s to wait for apiserver process to appear ...
	I1213 19:44:58.001651  222758 api_server.go:88] waiting for apiserver healthz status ...
	I1213 19:44:58.001674  222758 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1213 19:44:58.116623  222758 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 19:44:58.116713  222758 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 19:44:58.502527  222758 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1213 19:44:58.511043  222758 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 19:44:58.511079  222758 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 19:44:59.002771  222758 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1213 19:44:59.010901  222758 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1213 19:44:59.011943  222758 api_server.go:141] control plane version: v1.34.2
	I1213 19:44:59.011968  222758 api_server.go:131] duration metric: took 1.010309059s to wait for apiserver health ...
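
Annotation (not part of the log): the retry loop above polls https://192.168.85.2:8443/healthz until the 500 responses (rbac/bootstrap-roles, scheduling and bootstrap-controller hooks still failing) give way to a 200. A bare-bones polling sketch in the same spirit; for brevity it skips TLS verification, whereas the real check trusts the cluster CA:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// Illustration only: load the cluster CA instead of skipping verification in real code.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://192.168.85.2:8443/healthz")
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("apiserver healthy")
    				return
    			}
    			fmt.Println("healthz returned", resp.StatusCode)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for healthz")
    }
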
	I1213 19:44:59.011978  222758 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 19:44:59.015965  222758 system_pods.go:59] 7 kube-system pods found
	I1213 19:44:59.016040  222758 system_pods.go:61] "coredns-66bc5c9577-n9958" [22e2b1b5-7a27-4ca8-89e0-cce8c2000a1d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 19:44:59.016063  222758 system_pods.go:61] "etcd-pause-327125" [edef5c87-0b37-4ffb-9d94-2cd8e868dd10] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 19:44:59.016070  222758 system_pods.go:61] "kindnet-rrvxm" [404728d3-6e60-4ffd-8fde-d04cd97b1d71] Running
	I1213 19:44:59.016090  222758 system_pods.go:61] "kube-apiserver-pause-327125" [308c3248-32c3-4c14-96f0-ea35adf20b4e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 19:44:59.016098  222758 system_pods.go:61] "kube-controller-manager-pause-327125" [3bab01e9-17c7-4f72-ae08-e99d2710dc00] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 19:44:59.016112  222758 system_pods.go:61] "kube-proxy-wm755" [448b8703-4d1c-436d-8066-34855c077030] Running
	I1213 19:44:59.016122  222758 system_pods.go:61] "kube-scheduler-pause-327125" [9154159c-a53b-4594-a0ab-2084e70b508d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 19:44:59.016129  222758 system_pods.go:74] duration metric: took 4.143742ms to wait for pod list to return data ...
	I1213 19:44:59.016140  222758 default_sa.go:34] waiting for default service account to be created ...
	I1213 19:44:59.021743  222758 default_sa.go:45] found service account: "default"
	I1213 19:44:59.021772  222758 default_sa.go:55] duration metric: took 5.626095ms for default service account to be created ...
	I1213 19:44:59.021783  222758 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 19:44:59.025538  222758 system_pods.go:86] 7 kube-system pods found
	I1213 19:44:59.025575  222758 system_pods.go:89] "coredns-66bc5c9577-n9958" [22e2b1b5-7a27-4ca8-89e0-cce8c2000a1d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 19:44:59.025585  222758 system_pods.go:89] "etcd-pause-327125" [edef5c87-0b37-4ffb-9d94-2cd8e868dd10] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 19:44:59.025590  222758 system_pods.go:89] "kindnet-rrvxm" [404728d3-6e60-4ffd-8fde-d04cd97b1d71] Running
	I1213 19:44:59.025596  222758 system_pods.go:89] "kube-apiserver-pause-327125" [308c3248-32c3-4c14-96f0-ea35adf20b4e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 19:44:59.025604  222758 system_pods.go:89] "kube-controller-manager-pause-327125" [3bab01e9-17c7-4f72-ae08-e99d2710dc00] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 19:44:59.025608  222758 system_pods.go:89] "kube-proxy-wm755" [448b8703-4d1c-436d-8066-34855c077030] Running
	I1213 19:44:59.025614  222758 system_pods.go:89] "kube-scheduler-pause-327125" [9154159c-a53b-4594-a0ab-2084e70b508d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 19:44:59.025622  222758 system_pods.go:126] duration metric: took 3.832848ms to wait for k8s-apps to be running ...
	I1213 19:44:59.025630  222758 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 19:44:59.025687  222758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 19:44:59.039180  222758 system_svc.go:56] duration metric: took 13.538759ms WaitForService to wait for kubelet
	I1213 19:44:59.039208  222758 kubeadm.go:587] duration metric: took 6.347918333s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 19:44:59.039228  222758 node_conditions.go:102] verifying NodePressure condition ...
	I1213 19:44:59.042507  222758 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1213 19:44:59.042542  222758 node_conditions.go:123] node cpu capacity is 2
	I1213 19:44:59.042556  222758 node_conditions.go:105] duration metric: took 3.324086ms to run NodePressure ...
	I1213 19:44:59.042569  222758 start.go:242] waiting for startup goroutines ...
	I1213 19:44:59.042576  222758 start.go:247] waiting for cluster config update ...
	I1213 19:44:59.042584  222758 start.go:256] writing updated cluster config ...
	I1213 19:44:59.042902  222758 ssh_runner.go:195] Run: rm -f paused
	I1213 19:44:59.046625  222758 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 19:44:59.047255  222758 kapi.go:59] client config for pause-327125: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/profiles/pause-327125/client.crt", KeyFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/profiles/pause-327125/client.key", CAFile:"/home/jenkins/minikube-integration/22122-2686/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(
nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 19:44:59.115999  222758 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-n9958" in "kube-system" namespace to be "Ready" or be gone ...
	W1213 19:45:01.122664  222758 pod_ready.go:104] pod "coredns-66bc5c9577-n9958" is not "Ready", error: <nil>
	W1213 19:45:03.622466  222758 pod_ready.go:104] pod "coredns-66bc5c9577-n9958" is not "Ready", error: <nil>
	I1213 19:45:05.622742  222758 pod_ready.go:94] pod "coredns-66bc5c9577-n9958" is "Ready"
	I1213 19:45:05.622768  222758 pod_ready.go:86] duration metric: took 6.506738651s for pod "coredns-66bc5c9577-n9958" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 19:45:05.625727  222758 pod_ready.go:83] waiting for pod "etcd-pause-327125" in "kube-system" namespace to be "Ready" or be gone ...
	W1213 19:45:07.631323  222758 pod_ready.go:104] pod "etcd-pause-327125" is not "Ready", error: <nil>
	I1213 19:45:08.132200  222758 pod_ready.go:94] pod "etcd-pause-327125" is "Ready"
	I1213 19:45:08.132271  222758 pod_ready.go:86] duration metric: took 2.506516054s for pod "etcd-pause-327125" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 19:45:08.134844  222758 pod_ready.go:83] waiting for pod "kube-apiserver-pause-327125" in "kube-system" namespace to be "Ready" or be gone ...
	W1213 19:45:10.140924  222758 pod_ready.go:104] pod "kube-apiserver-pause-327125" is not "Ready", error: <nil>
	W1213 19:45:12.141189  222758 pod_ready.go:104] pod "kube-apiserver-pause-327125" is not "Ready", error: <nil>
	I1213 19:45:13.140442  222758 pod_ready.go:94] pod "kube-apiserver-pause-327125" is "Ready"
	I1213 19:45:13.140474  222758 pod_ready.go:86] duration metric: took 5.005603448s for pod "kube-apiserver-pause-327125" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 19:45:13.142841  222758 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-327125" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 19:45:13.147624  222758 pod_ready.go:94] pod "kube-controller-manager-pause-327125" is "Ready"
	I1213 19:45:13.147655  222758 pod_ready.go:86] duration metric: took 4.78619ms for pod "kube-controller-manager-pause-327125" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 19:45:13.150817  222758 pod_ready.go:83] waiting for pod "kube-proxy-wm755" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 19:45:13.157333  222758 pod_ready.go:94] pod "kube-proxy-wm755" is "Ready"
	I1213 19:45:13.157363  222758 pod_ready.go:86] duration metric: took 6.51593ms for pod "kube-proxy-wm755" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 19:45:13.159947  222758 pod_ready.go:83] waiting for pod "kube-scheduler-pause-327125" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 19:45:13.339143  222758 pod_ready.go:94] pod "kube-scheduler-pause-327125" is "Ready"
	I1213 19:45:13.339171  222758 pod_ready.go:86] duration metric: took 179.200709ms for pod "kube-scheduler-pause-327125" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 19:45:13.339186  222758 pod_ready.go:40] duration metric: took 14.292528046s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
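
Annotation (not part of the log): the pod_ready waits above poll each kube-system pod until its Ready condition turns True (or the pod goes away). A client-go sketch of that idea against the kubeconfig written earlier; this illustrates the pattern only and is not minikube's pod_ready.go:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22122-2686/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	for {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-66bc5c9577-n9958", metav1.GetOptions{})
    		if err == nil {
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    					fmt.Println("pod is Ready")
    					return
    				}
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    }
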
	I1213 19:45:13.409482  222758 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1213 19:45:13.412468  222758 out.go:179] * Done! kubectl is now configured to use "pause-327125" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 13 19:44:53 pause-327125 crio[2078]: time="2025-12-13T19:44:53.402430628Z" level=info msg="Started container" PID=2367 containerID=dcfd38d527b6be2d39e6ea9800a55589660af1cc8f83143bc6a628b2a6cddcd8 description=kube-system/kube-apiserver-pause-327125/kube-apiserver id=44055caf-334e-4343-a100-9300ab4bafe0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a8ec81e982d49d4b18696c5338d90a51e7771563607dbf05511fa05876ab4ec8
	Dec 13 19:44:53 pause-327125 crio[2078]: time="2025-12-13T19:44:53.415226672Z" level=info msg="Started container" PID=2375 containerID=1a40f83317314c07f555fa39401a8be922e3f11b98c3806ff21541a00bbf5124 description=kube-system/kube-proxy-wm755/kube-proxy id=22330665-5da2-4c48-8b82-e7c4d8c41f28 name=/runtime.v1.RuntimeService/StartContainer sandboxID=dcf70b4b17a00869ec1ce46f2d13b8143a05d50e21eba3f19613a6fffbd71ca8
	Dec 13 19:44:53 pause-327125 crio[2078]: time="2025-12-13T19:44:53.43956802Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 19:44:53 pause-327125 crio[2078]: time="2025-12-13T19:44:53.452258316Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 19:44:53 pause-327125 crio[2078]: time="2025-12-13T19:44:53.474792927Z" level=info msg="Created container b90ed5b617a8e5e9b6b1c998531c7c69f3763c7208d2c12026c5e662fbea0428: kube-system/etcd-pause-327125/etcd" id=d9656fde-0d9e-4750-a1ef-84af9563ad9a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 19:44:53 pause-327125 crio[2078]: time="2025-12-13T19:44:53.485135641Z" level=info msg="Starting container: b90ed5b617a8e5e9b6b1c998531c7c69f3763c7208d2c12026c5e662fbea0428" id=96a64787-e990-4e7f-bb05-d1bd53af4cbc name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 19:44:53 pause-327125 crio[2078]: time="2025-12-13T19:44:53.489210583Z" level=info msg="Created container 55de777e8b51f0c3aa3fb1f964df14c259552a6dbc6091767e6e1ac531f820ce: kube-system/kindnet-rrvxm/kindnet-cni" id=9074a29b-ad0d-4365-a6d1-ccac31cb722b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 19:44:53 pause-327125 crio[2078]: time="2025-12-13T19:44:53.493627417Z" level=info msg="Started container" PID=2402 containerID=b90ed5b617a8e5e9b6b1c998531c7c69f3763c7208d2c12026c5e662fbea0428 description=kube-system/etcd-pause-327125/etcd id=96a64787-e990-4e7f-bb05-d1bd53af4cbc name=/runtime.v1.RuntimeService/StartContainer sandboxID=83fc490189c11dc033e72c21b3b1dba38236ce7a6ef466d1c470a6859b07a4cd
	Dec 13 19:44:53 pause-327125 crio[2078]: time="2025-12-13T19:44:53.49815882Z" level=info msg="Starting container: 55de777e8b51f0c3aa3fb1f964df14c259552a6dbc6091767e6e1ac531f820ce" id=2daae56d-b5ff-4d91-8687-3a331d5502ed name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 19:44:53 pause-327125 crio[2078]: time="2025-12-13T19:44:53.508189894Z" level=info msg="Started container" PID=2395 containerID=55de777e8b51f0c3aa3fb1f964df14c259552a6dbc6091767e6e1ac531f820ce description=kube-system/kindnet-rrvxm/kindnet-cni id=2daae56d-b5ff-4d91-8687-3a331d5502ed name=/runtime.v1.RuntimeService/StartContainer sandboxID=6bf9cbde47db43e67f53be9aae10efd0148660f01dc1d6d8000d48e6a2e98570
	Dec 13 19:44:53 pause-327125 crio[2078]: time="2025-12-13T19:44:53.524215862Z" level=info msg="Created container 42beff3f4415684d2db4b4f6cd38d8017bd8b45ccc3e0fffa01fd65f6646bc7f: kube-system/coredns-66bc5c9577-n9958/coredns" id=b0b6acd4-db5e-47f8-a6e0-163a6d417a94 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 19:44:53 pause-327125 crio[2078]: time="2025-12-13T19:44:53.525131608Z" level=info msg="Starting container: 42beff3f4415684d2db4b4f6cd38d8017bd8b45ccc3e0fffa01fd65f6646bc7f" id=cde1be8e-9d51-470f-bbe2-296f0d91abbf name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 19:44:53 pause-327125 crio[2078]: time="2025-12-13T19:44:53.527663231Z" level=info msg="Started container" PID=2414 containerID=42beff3f4415684d2db4b4f6cd38d8017bd8b45ccc3e0fffa01fd65f6646bc7f description=kube-system/coredns-66bc5c9577-n9958/coredns id=cde1be8e-9d51-470f-bbe2-296f0d91abbf name=/runtime.v1.RuntimeService/StartContainer sandboxID=0dec3714476f5a6dc95d08d2264e0ae2eea3d92f2082d134b24bb0fb7fbb9e16
	Dec 13 19:45:03 pause-327125 crio[2078]: time="2025-12-13T19:45:03.846294085Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 19:45:03 pause-327125 crio[2078]: time="2025-12-13T19:45:03.849893397Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 19:45:03 pause-327125 crio[2078]: time="2025-12-13T19:45:03.849930854Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 19:45:03 pause-327125 crio[2078]: time="2025-12-13T19:45:03.849953516Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 19:45:03 pause-327125 crio[2078]: time="2025-12-13T19:45:03.853282788Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 19:45:03 pause-327125 crio[2078]: time="2025-12-13T19:45:03.853318817Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 19:45:03 pause-327125 crio[2078]: time="2025-12-13T19:45:03.853341611Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 19:45:03 pause-327125 crio[2078]: time="2025-12-13T19:45:03.856666502Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 19:45:03 pause-327125 crio[2078]: time="2025-12-13T19:45:03.856702703Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 19:45:03 pause-327125 crio[2078]: time="2025-12-13T19:45:03.856727121Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 19:45:03 pause-327125 crio[2078]: time="2025-12-13T19:45:03.859805537Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 19:45:03 pause-327125 crio[2078]: time="2025-12-13T19:45:03.85983908Z" level=info msg="Updated default CNI network name to kindnet"
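
The CNI monitoring events above show kindnet writing its config atomically: a .temp file is created and written, then renamed over /etc/cni/net.d/10-kindnet.conflist. An illustrative way to inspect the resulting file on the node (assuming SSH access to the profile through minikube) is:

  minikube ssh -p pause-327125 -- sudo cat /etc/cni/net.d/10-kindnet.conflist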
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	42beff3f44156       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   25 seconds ago       Running             coredns                   1                   0dec3714476f5       coredns-66bc5c9577-n9958               kube-system
	b90ed5b617a8e       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   25 seconds ago       Running             etcd                      1                   83fc490189c11       etcd-pause-327125                      kube-system
	55de777e8b51f       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   25 seconds ago       Running             kindnet-cni               1                   6bf9cbde47db4       kindnet-rrvxm                          kube-system
	1a40f83317314       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   25 seconds ago       Running             kube-proxy                1                   dcf70b4b17a00       kube-proxy-wm755                       kube-system
	dcfd38d527b6b       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   25 seconds ago       Running             kube-apiserver            1                   a8ec81e982d49       kube-apiserver-pause-327125            kube-system
	8f5ba1ee2810a       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   25 seconds ago       Running             kube-scheduler            1                   a8001cbe04bae       kube-scheduler-pause-327125            kube-system
	cf30731ee12b9       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   25 seconds ago       Running             kube-controller-manager   1                   690c6479d674d       kube-controller-manager-pause-327125   kube-system
	7220c98a72257       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   37 seconds ago       Exited              coredns                   0                   0dec3714476f5       coredns-66bc5c9577-n9958               kube-system
	1ed1dff88cccf       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   About a minute ago   Exited              kube-proxy                0                   dcf70b4b17a00       kube-proxy-wm755                       kube-system
	75d5802e02f12       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   6bf9cbde47db4       kindnet-rrvxm                          kube-system
	aaf565413e194       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   About a minute ago   Exited              etcd                      0                   83fc490189c11       etcd-pause-327125                      kube-system
	76981bd3c6c8f       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   About a minute ago   Exited              kube-scheduler            0                   a8001cbe04bae       kube-scheduler-pause-327125            kube-system
	68c32cc7d3c1f       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   About a minute ago   Exited              kube-apiserver            0                   a8ec81e982d49       kube-apiserver-pause-327125            kube-system
	8bdee30f7b308       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   About a minute ago   Exited              kube-controller-manager   0                   690c6479d674d       kube-controller-manager-pause-327125   kube-system
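
Every system container appears twice in the table: ATTEMPT 1 Running and ATTEMPT 0 Exited, i.e. each component was restarted exactly once during the pause test. A comparable listing can be taken directly on the node with crictl (illustrative; assumes the default CRI-O socket is in use):

  minikube ssh -p pause-327125 -- sudo crictl ps -a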
	
	
	==> coredns [42beff3f4415684d2db4b4f6cd38d8017bd8b45ccc3e0fffa01fd65f6646bc7f] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55652 - 7874 "HINFO IN 6840119838698813041.8823242800484221961. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.026057189s
	
	
	==> coredns [7220c98a72257ad1aafe49b3bb8b08900afa0ea714b2d4d6646ef31da20fa812] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50198 - 3219 "HINFO IN 3495353353782401444.6666573847651989814. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013182039s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
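
Both CoreDNS instances report the same configuration SHA512, so the restart did not change the Corefile. In a kubeadm-style cluster the Corefile is stored in the coredns ConfigMap, so one way to confirm this (illustrative command, context name assumed from the start log) is:

  kubectl --context pause-327125 -n kube-system get configmap coredns -o yaml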
	
	
	==> describe nodes <==
	Name:               pause-327125
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-327125
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=142a8bd7cb3f031b5f72a3965bb211dc77d9e1a7
	                    minikube.k8s.io/name=pause-327125
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T19_43_53_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 19:43:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-327125
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 19:45:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 19:44:40 +0000   Sat, 13 Dec 2025 19:43:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 19:44:40 +0000   Sat, 13 Dec 2025 19:43:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 19:44:40 +0000   Sat, 13 Dec 2025 19:43:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 19:44:40 +0000   Sat, 13 Dec 2025 19:44:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-327125
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 78f85184c267cd52312ad0096937f858
	  System UUID:                2b77a055-1c8f-48a2-bab3-e094df3b8f45
	  Boot ID:                    76aeba50-958b-45ee-957d-e00cd07a99b2
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-n9958                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     80s
	  kube-system                 etcd-pause-327125                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         85s
	  kube-system                 kindnet-rrvxm                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      80s
	  kube-system                 kube-apiserver-pause-327125             250m (12%)    0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-controller-manager-pause-327125    200m (10%)    0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-proxy-wm755                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 kube-scheduler-pause-327125             100m (5%)     0 (0%)      0 (0%)           0 (0%)         85s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 77s                kube-proxy       
	  Normal   Starting                 20s                kube-proxy       
	  Warning  CgroupV1                 93s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  93s (x9 over 93s)  kubelet          Node pause-327125 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    93s (x8 over 93s)  kubelet          Node pause-327125 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     93s (x7 over 93s)  kubelet          Node pause-327125 status is now: NodeHasSufficientPID
	  Normal   Starting                 86s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 86s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  85s                kubelet          Node pause-327125 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    85s                kubelet          Node pause-327125 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     85s                kubelet          Node pause-327125 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           81s                node-controller  Node pause-327125 event: Registered Node pause-327125 in Controller
	  Normal   NodeReady                38s                kubelet          Node pause-327125 status is now: NodeReady
	  Normal   RegisteredNode           17s                node-controller  Node pause-327125 event: Registered Node pause-327125 in Controller
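
The node description above (Ready condition True, two RegisteredNode events and two kube-proxy Starting events) is consistent with a single control-plane node that was restarted once. It is standard kubectl node output and can be regenerated with, e.g. (context name assumed from the start log):

  kubectl --context pause-327125 describe node pause-327125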
	
	
	==> dmesg <==
	[Dec13 19:05] overlayfs: idmapped layers are currently not supported
	[  +4.041925] overlayfs: idmapped layers are currently not supported
	[ +36.958854] overlayfs: idmapped layers are currently not supported
	[Dec13 19:06] overlayfs: idmapped layers are currently not supported
	[Dec13 19:07] overlayfs: idmapped layers are currently not supported
	[  +4.088622] overlayfs: idmapped layers are currently not supported
	[Dec13 19:16] overlayfs: idmapped layers are currently not supported
	[Dec13 19:18] overlayfs: idmapped layers are currently not supported
	[Dec13 19:22] overlayfs: idmapped layers are currently not supported
	[Dec13 19:23] overlayfs: idmapped layers are currently not supported
	[Dec13 19:24] overlayfs: idmapped layers are currently not supported
	[Dec13 19:25] overlayfs: idmapped layers are currently not supported
	[Dec13 19:26] overlayfs: idmapped layers are currently not supported
	[Dec13 19:28] overlayfs: idmapped layers are currently not supported
	[ +16.353793] overlayfs: idmapped layers are currently not supported
	[ +17.019256] overlayfs: idmapped layers are currently not supported
	[Dec13 19:29] overlayfs: idmapped layers are currently not supported
	[Dec13 19:30] overlayfs: idmapped layers are currently not supported
	[ +42.207433] overlayfs: idmapped layers are currently not supported
	[Dec13 19:31] overlayfs: idmapped layers are currently not supported
	[Dec13 19:32] overlayfs: idmapped layers are currently not supported
	[Dec13 19:33] overlayfs: idmapped layers are currently not supported
	[Dec13 19:35] overlayfs: idmapped layers are currently not supported
	[Dec13 19:36] overlayfs: idmapped layers are currently not supported
	[Dec13 19:43] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [aaf565413e1949b50c1ec1ad4e41419d439117e48ca481c22c331764e7731b89] <==
	{"level":"warn","ts":"2025-12-13T19:43:49.010539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:43:49.026157Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:43:49.050172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:43:49.079808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:43:49.115747Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:43:49.129374Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:43:49.225479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56222","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-13T19:44:44.567184Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-13T19:44:44.567254Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-327125","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-12-13T19:44:44.567369Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-13T19:44:44.855322Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-13T19:44:44.855401Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T19:44:44.855423Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-12-13T19:44:44.855474Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-12-13T19:44:44.855538Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-13T19:44:44.855572Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-13T19:44:44.855583Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T19:44:44.855603Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-13T19:44:44.855695Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-13T19:44:44.855740Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-13T19:44:44.855781Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T19:44:44.858885Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-12-13T19:44:44.858984Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T19:44:44.859058Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-13T19:44:44.859089Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-327125","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> etcd [b90ed5b617a8e5e9b6b1c998531c7c69f3763c7208d2c12026c5e662fbea0428] <==
	{"level":"warn","ts":"2025-12-13T19:44:56.498897Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:44:56.514392Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:44:56.533263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:44:56.551077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:44:56.568775Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:44:56.603622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:44:56.647146Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:44:56.681939Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:44:56.696070Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:44:56.726781Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:44:56.732958Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:44:56.756128Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:44:56.777196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:44:56.801171Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:44:56.813225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:44:56.846025Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:44:56.871283Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:44:56.881205Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:44:56.900035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:44:56.916675Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:44:56.951802Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:44:56.973361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:44:56.996007Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:44:57.034385Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T19:44:57.118643Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42444","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 19:45:18 up  2:27,  0 user,  load average: 2.08, 2.30, 2.07
	Linux pause-327125 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [55de777e8b51f0c3aa3fb1f964df14c259552a6dbc6091767e6e1ac531f820ce] <==
	I1213 19:44:53.631086       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1213 19:44:53.631583       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1213 19:44:53.631757       1 main.go:148] setting mtu 1500 for CNI 
	I1213 19:44:53.631803       1 main.go:178] kindnetd IP family: "ipv4"
	I1213 19:44:53.631843       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-13T19:44:53Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1213 19:44:53.846238       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1213 19:44:53.846268       1 controller.go:381] "Waiting for informer caches to sync"
	I1213 19:44:53.846277       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1213 19:44:53.846951       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1213 19:44:58.046417       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1213 19:44:58.046467       1 metrics.go:72] Registering metrics
	I1213 19:44:58.046538       1 controller.go:711] "Syncing nftables rules"
	I1213 19:45:03.845853       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1213 19:45:03.845939       1 main.go:301] handling current node
	I1213 19:45:13.846153       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1213 19:45:13.846222       1 main.go:301] handling current node
	
	
	==> kindnet [75d5802e02f12610e502e79be2dfa4c49a2d962ac8ba1a7e6706a97f9dcc1ae1] <==
	I1213 19:43:59.926189       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1213 19:43:59.926577       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1213 19:43:59.926717       1 main.go:148] setting mtu 1500 for CNI 
	I1213 19:43:59.926733       1 main.go:178] kindnetd IP family: "ipv4"
	I1213 19:43:59.926747       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-13T19:44:00Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1213 19:44:00.421188       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1213 19:44:00.421219       1 controller.go:381] "Waiting for informer caches to sync"
	I1213 19:44:00.421234       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1213 19:44:00.421691       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1213 19:44:30.420844       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1213 19:44:30.421878       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1213 19:44:30.421888       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1213 19:44:30.421996       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1213 19:44:31.822195       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1213 19:44:31.822279       1 metrics.go:72] Registering metrics
	I1213 19:44:31.822355       1 controller.go:711] "Syncing nftables rules"
	I1213 19:44:40.424677       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1213 19:44:40.424715       1 main.go:301] handling current node
	
	
	==> kube-apiserver [68c32cc7d3c1f50302bec49c92368e3854e8146147b27d256d8c15e40407d1b2] <==
	W1213 19:44:44.597393       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 19:44:44.597489       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 19:44:44.597611       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 19:44:44.597711       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 19:44:44.597819       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 19:44:44.597924       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 19:44:44.598053       1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 19:44:44.598181       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 19:44:44.598293       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 19:44:44.598397       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 19:44:44.598500       1 logging.go:55] [core] [Channel #21 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 19:44:44.598659       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 19:44:44.598780       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 19:44:44.598998       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 19:44:44.599102       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 19:44:44.599202       1 logging.go:55] [core] [Channel #195 SubChannel #197]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 19:44:44.599293       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 19:44:44.599445       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 19:44:44.599529       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 19:44:44.599614       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 19:44:44.599688       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 19:44:44.599750       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 19:44:44.599880       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 19:44:44.599950       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 19:44:44.600005       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [dcfd38d527b6be2d39e6ea9800a55589660af1cc8f83143bc6a628b2a6cddcd8] <==
	I1213 19:44:57.952053       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1213 19:44:57.956088       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1213 19:44:57.957025       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1213 19:44:57.973972       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1213 19:44:57.974153       1 policy_source.go:240] refreshing policies
	I1213 19:44:57.993082       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 19:44:58.010120       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1213 19:44:58.010363       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1213 19:44:58.011106       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1213 19:44:58.011186       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1213 19:44:58.010257       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1213 19:44:58.011409       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1213 19:44:58.011699       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1213 19:44:58.011800       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1213 19:44:58.030797       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1213 19:44:58.040371       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1213 19:44:58.063063       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1213 19:44:58.082348       1 cache.go:39] Caches are synced for autoregister controller
	E1213 19:44:58.123605       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1213 19:44:58.715229       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1213 19:44:59.931645       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1213 19:45:01.363790       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1213 19:45:01.610013       1 controller.go:667] quota admission added evaluator for: endpoints
	I1213 19:45:01.659945       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 19:45:01.711972       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [8bdee30f7b308a0339fbe56206ca3d6a98e2801a472b45dc376f396eb6767b8b] <==
	I1213 19:43:57.376511       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1213 19:43:57.376521       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1213 19:43:57.376530       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1213 19:43:57.376538       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1213 19:43:57.376238       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1213 19:43:57.383277       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1213 19:43:57.383351       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1213 19:43:57.383366       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1213 19:43:57.384331       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1213 19:43:57.391622       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1213 19:43:57.401001       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1213 19:43:57.410546       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1213 19:43:57.411756       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1213 19:43:57.411777       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1213 19:43:57.412927       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 19:43:57.412953       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1213 19:43:57.421185       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1213 19:43:57.423554       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 19:43:57.423578       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1213 19:43:57.423585       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1213 19:43:57.424483       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1213 19:43:57.425776       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1213 19:43:57.427543       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1213 19:43:57.428709       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1213 19:44:42.381936       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [cf30731ee12b967c35a6cb52d0e3eb3ae3960ec63dd7bb09a968da9f43eebffb] <==
	I1213 19:45:01.317950       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1213 19:45:01.319616       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1213 19:45:01.319734       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1213 19:45:01.325584       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1213 19:45:01.326830       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 19:45:01.326853       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1213 19:45:01.326861       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1213 19:45:01.333504       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1213 19:45:01.334668       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 19:45:01.345328       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1213 19:45:01.349884       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1213 19:45:01.351485       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1213 19:45:01.351534       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1213 19:45:01.353103       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1213 19:45:01.353340       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1213 19:45:01.353843       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1213 19:45:01.357065       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1213 19:45:01.359191       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 19:45:01.361107       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1213 19:45:01.364212       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1213 19:45:01.365430       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1213 19:45:01.367685       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1213 19:45:01.371028       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1213 19:45:01.374551       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1213 19:45:01.385927       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [1a40f83317314c07f555fa39401a8be922e3f11b98c3806ff21541a00bbf5124] <==
	I1213 19:44:56.425135       1 server_linux.go:53] "Using iptables proxy"
	I1213 19:44:57.389129       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1213 19:44:58.189827       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 19:44:58.189895       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1213 19:44:58.190004       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 19:44:58.229686       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 19:44:58.229814       1 server_linux.go:132] "Using iptables Proxier"
	I1213 19:44:58.242182       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 19:44:58.242479       1 server.go:527] "Version info" version="v1.34.2"
	I1213 19:44:58.242551       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 19:44:58.256947       1 config.go:106] "Starting endpoint slice config controller"
	I1213 19:44:58.256974       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 19:44:58.257334       1 config.go:200] "Starting service config controller"
	I1213 19:44:58.257355       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 19:44:58.257710       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 19:44:58.257726       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 19:44:58.258158       1 config.go:309] "Starting node config controller"
	I1213 19:44:58.258178       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 19:44:58.258185       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 19:44:58.357764       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 19:44:58.357832       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 19:44:58.357845       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [1ed1dff88cccfc264be42d8a89f25edba5cdd04758cb56c2f4f47d5db62de61c] <==
	I1213 19:44:00.530378       1 server_linux.go:53] "Using iptables proxy"
	I1213 19:44:00.634918       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1213 19:44:00.737066       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 19:44:00.737107       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1213 19:44:00.737192       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 19:44:00.755851       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 19:44:00.755913       1 server_linux.go:132] "Using iptables Proxier"
	I1213 19:44:00.759505       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 19:44:00.759847       1 server.go:527] "Version info" version="v1.34.2"
	I1213 19:44:00.759921       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 19:44:00.762786       1 config.go:106] "Starting endpoint slice config controller"
	I1213 19:44:00.762863       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 19:44:00.763183       1 config.go:200] "Starting service config controller"
	I1213 19:44:00.763226       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 19:44:00.763579       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 19:44:00.815125       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 19:44:00.815154       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 19:44:00.763973       1 config.go:309] "Starting node config controller"
	I1213 19:44:00.815182       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 19:44:00.815187       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 19:44:00.863425       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 19:44:00.863457       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [76981bd3c6c8f820b72bd027ca5829b5098f61b47ed859bd1e2fd64fa786a137] <==
	E1213 19:43:50.415628       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1213 19:43:50.415802       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1213 19:43:50.417468       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1213 19:43:50.417607       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1213 19:43:50.417675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1213 19:43:50.417729       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1213 19:43:50.417768       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1213 19:43:50.417812       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1213 19:43:50.417850       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1213 19:43:50.417936       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1213 19:43:50.420406       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 19:43:51.242877       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1213 19:43:51.268319       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1213 19:43:51.317637       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1213 19:43:51.336860       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1213 19:43:51.370508       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1213 19:43:51.452930       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1213 19:43:51.459317       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1213 19:43:53.519484       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 19:44:44.568365       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1213 19:44:44.568388       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1213 19:44:44.568410       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1213 19:44:44.568432       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 19:44:44.568643       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1213 19:44:44.568658       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [8f5ba1ee2810a03a1e4142f99dbd279938b0c93175c0f6e1e7cea4d27503ead4] <==
	I1213 19:44:54.974367       1 serving.go:386] Generated self-signed cert in-memory
	W1213 19:44:57.893929       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1213 19:44:57.894034       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1213 19:44:57.894069       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1213 19:44:57.894122       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1213 19:44:58.048753       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1213 19:44:58.048793       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 19:44:58.056298       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 19:44:58.057465       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 19:44:58.057123       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1213 19:44:58.057150       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1213 19:44:58.159385       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 13 19:44:53 pause-327125 kubelet[1319]: E1213 19:44:53.320928    1319 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-327125\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="71f33a69a01b767ca6767be4048d30ea" pod="kube-system/etcd-pause-327125"
	Dec 13 19:44:53 pause-327125 kubelet[1319]: E1213 19:44:53.321153    1319 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-327125\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="cd656f340533802e82e2ce167ea59578" pod="kube-system/kube-apiserver-pause-327125"
	Dec 13 19:44:53 pause-327125 kubelet[1319]: E1213 19:44:53.321353    1319 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-327125\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="d3e30aaf91ff4055a3b9e68a0817287a" pod="kube-system/kube-controller-manager-pause-327125"
	Dec 13 19:44:53 pause-327125 kubelet[1319]: E1213 19:44:53.321601    1319 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-rrvxm\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="404728d3-6e60-4ffd-8fde-d04cd97b1d71" pod="kube-system/kindnet-rrvxm"
	Dec 13 19:44:53 pause-327125 kubelet[1319]: I1213 19:44:53.323346    1319 scope.go:117] "RemoveContainer" containerID="1ed1dff88cccfc264be42d8a89f25edba5cdd04758cb56c2f4f47d5db62de61c"
	Dec 13 19:44:53 pause-327125 kubelet[1319]: E1213 19:44:53.323788    1319 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-327125\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="cd656f340533802e82e2ce167ea59578" pod="kube-system/kube-apiserver-pause-327125"
	Dec 13 19:44:53 pause-327125 kubelet[1319]: E1213 19:44:53.323962    1319 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-327125\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="d3e30aaf91ff4055a3b9e68a0817287a" pod="kube-system/kube-controller-manager-pause-327125"
	Dec 13 19:44:53 pause-327125 kubelet[1319]: E1213 19:44:53.324127    1319 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wm755\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="448b8703-4d1c-436d-8066-34855c077030" pod="kube-system/kube-proxy-wm755"
	Dec 13 19:44:53 pause-327125 kubelet[1319]: E1213 19:44:53.325376    1319 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-rrvxm\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="404728d3-6e60-4ffd-8fde-d04cd97b1d71" pod="kube-system/kindnet-rrvxm"
	Dec 13 19:44:53 pause-327125 kubelet[1319]: E1213 19:44:53.325648    1319 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-327125\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="ac4c33e3f03b6273c35146d2e13008e5" pod="kube-system/kube-scheduler-pause-327125"
	Dec 13 19:44:53 pause-327125 kubelet[1319]: E1213 19:44:53.327229    1319 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-327125\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="71f33a69a01b767ca6767be4048d30ea" pod="kube-system/etcd-pause-327125"
	Dec 13 19:44:53 pause-327125 kubelet[1319]: I1213 19:44:53.330834    1319 scope.go:117] "RemoveContainer" containerID="7220c98a72257ad1aafe49b3bb8b08900afa0ea714b2d4d6646ef31da20fa812"
	Dec 13 19:44:53 pause-327125 kubelet[1319]: E1213 19:44:53.331830    1319 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-327125\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="d3e30aaf91ff4055a3b9e68a0817287a" pod="kube-system/kube-controller-manager-pause-327125"
	Dec 13 19:44:53 pause-327125 kubelet[1319]: E1213 19:44:53.332271    1319 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wm755\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="448b8703-4d1c-436d-8066-34855c077030" pod="kube-system/kube-proxy-wm755"
	Dec 13 19:44:53 pause-327125 kubelet[1319]: E1213 19:44:53.332621    1319 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-rrvxm\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="404728d3-6e60-4ffd-8fde-d04cd97b1d71" pod="kube-system/kindnet-rrvxm"
	Dec 13 19:44:53 pause-327125 kubelet[1319]: E1213 19:44:53.332880    1319 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-n9958\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="22e2b1b5-7a27-4ca8-89e0-cce8c2000a1d" pod="kube-system/coredns-66bc5c9577-n9958"
	Dec 13 19:44:53 pause-327125 kubelet[1319]: E1213 19:44:53.333391    1319 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-327125\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="ac4c33e3f03b6273c35146d2e13008e5" pod="kube-system/kube-scheduler-pause-327125"
	Dec 13 19:44:53 pause-327125 kubelet[1319]: E1213 19:44:53.333622    1319 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-327125\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="71f33a69a01b767ca6767be4048d30ea" pod="kube-system/etcd-pause-327125"
	Dec 13 19:44:53 pause-327125 kubelet[1319]: E1213 19:44:53.333836    1319 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-327125\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="cd656f340533802e82e2ce167ea59578" pod="kube-system/kube-apiserver-pause-327125"
	Dec 13 19:44:57 pause-327125 kubelet[1319]: E1213 19:44:57.971909    1319 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-327125\" is forbidden: User \"system:node:pause-327125\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-327125' and this object" podUID="ac4c33e3f03b6273c35146d2e13008e5" pod="kube-system/kube-scheduler-pause-327125"
	Dec 13 19:44:57 pause-327125 kubelet[1319]: E1213 19:44:57.972820    1319 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-327125\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-327125' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Dec 13 19:45:03 pause-327125 kubelet[1319]: W1213 19:45:03.248589    1319 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Dec 13 19:45:13 pause-327125 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 13 19:45:13 pause-327125 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 13 19:45:13 pause-327125 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-327125 -n pause-327125
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-327125 -n pause-327125: exit status 2 (356.843135ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-327125 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (6.35s)
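
The post-mortem above drives `out/minikube-linux-arm64 status --format={{.APIServer}}` and treats the non-zero exit as advisory ("status error: exit status 2 (may be ok)"). A minimal, illustrative Go sketch of that kind of status probe; the binary path and profile name are copied from this run, everything else is an assumption and not the test suite's actual helper:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// Illustrative only: run the same status command the post-mortem uses and read
// both stdout and the exit code, since minikube encodes degraded states as
// non-zero exit statuses while still printing the component state.
func main() {
	cmd := exec.Command("out/minikube-linux-arm64",
		"status", "--format={{.APIServer}}", "-p", "pause-327125", "-n", "pause-327125")
	out, err := cmd.Output()

	var exitErr *exec.ExitError
	if err != nil && !errors.As(err, &exitErr) {
		// The binary could not be started at all; there is no status to report.
		fmt.Println("could not run minikube:", err)
		return
	}
	fmt.Printf("apiserver: %q", strings.TrimSpace(string(out)))
	if exitErr != nil {
		// Mirrors the report's "exit status 2 (may be ok)" handling.
		fmt.Printf(" (exit status %d, may be ok)", exitErr.ExitCode())
	}
	fmt.Println()
}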

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (7200.072s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 20:16:32.972166    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/old-k8s-version-411093/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 20:16:42.459886    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-350101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
panic: test timed out after 2h0m0s
	running tests:
		TestNetworkPlugins (30m48s)
		TestNetworkPlugins/group/auto (1m4s)
		TestNetworkPlugins/group/auto/Start (1m4s)
		TestStartStop (33m27s)
		TestStartStop/group/no-preload (25m59s)
		TestStartStop/group/no-preload/serial (25m59s)
		TestStartStop/group/no-preload/serial/AddonExistsAfterStop (25s)

                                                
                                                
goroutine 5499 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2682 +0x2b0
created by time.goFunc
	/usr/local/go/src/time/sleep.go:215 +0x38

                                                
                                                
goroutine 1 [chan receive, 27 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1891 +0x3d0
testing.tRunner(0x4000466700, 0x4000755bb8)
	/usr/local/go/src/testing/testing.go:1940 +0x104
testing.runTests(0x400069a1c8, {0x534c680, 0x2c, 0x2c}, {0x4000755d08?, 0x125774?, 0x5375080?})
	/usr/local/go/src/testing/testing.go:2475 +0x3b8
testing.(*M).Run(0x40007b6dc0)
	/usr/local/go/src/testing/testing.go:2337 +0x530
k8s.io/minikube/test/integration.TestMain(0x40007b6dc0)
	/home/jenkins/workspace/Build_Cross/test/integration/main_test.go:64 +0xf0
main.main()
	_testmain.go:133 +0x88

                                                
                                                
goroutine 3684 [chan receive, 31 minutes]:
testing.(*testState).waitParallel(0x40006db220)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.(*T).Parallel(0x4001b0ce00)
	/usr/local/go/src/testing/testing.go:1709 +0x19c
k8s.io/minikube/test/integration.MaybeParallel(0x4001b0ce00)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:501 +0x5c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x4001b0ce00)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x2c0
testing.tRunner(0x4001b0ce00, 0x4001679200)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3540
	/usr/local/go/src/testing/testing.go:1997 +0x364

                                                
                                                
goroutine 3541 [chan receive, 1 minutes]:
testing.(*T).Run(0x4001508c40, {0x296d724?, 0x368adf0?}, 0x400074aea0)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x4001508c40)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:111 +0x4f4
testing.tRunner(0x4001508c40, 0x4001678080)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3540
	/usr/local/go/src/testing/testing.go:1997 +0x364

                                                
                                                
goroutine 650 [IO wait, 114 minutes]:
internal/poll.runtime_pollWait(0xffff4a61d400, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x40015ae500?, 0x2d970?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0x40015ae500)
	/usr/local/go/src/internal/poll/fd_unix.go:613 +0x21c
net.(*netFD).accept(0x40015ae500)
	/usr/local/go/src/net/fd_unix.go:161 +0x28
net.(*TCPListener).accept(0x40003bec40)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x24
net.(*TCPListener).Accept(0x40003bec40)
	/usr/local/go/src/net/tcpsock.go:380 +0x2c
net/http.(*Server).Serve(0x4000152900, {0x36d4000, 0x40003bec40})
	/usr/local/go/src/net/http/server.go:3463 +0x24c
net/http.(*Server).ListenAndServe(0x4000152900)
	/usr/local/go/src/net/http/server.go:3389 +0x80
k8s.io/minikube/test/integration.startHTTPProxy.func1(...)
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2218
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 648
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2217 +0x104

                                                
                                                
goroutine 158 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x4000082070}, 0x4000108f40, 0x4000108f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x4000082070}, 0xa8?, 0x4000108f40, 0x4000108f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x4000082070?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x4000670600?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 178
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

                                                
                                                
goroutine 159 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 158
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

                                                
                                                
goroutine 157 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0x4004fae850, 0x2d)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4004fae840)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4004f95560)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x400155bdc0?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x4000082070?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x4000082070}, 0x400010af38, {0x369e520, 0x400075e030}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0?, {0x369e520?, 0x400075e030?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4004ef00a0, 0x3b9aca00, 0x0, 0x1, 0x4000082070)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 178
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

                                                
                                                
goroutine 3540 [chan receive, 31 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1891 +0x3d0
testing.tRunner(0x40015081c0, 0x4000aec228)
	/usr/local/go/src/testing/testing.go:1940 +0x104
created by testing.(*T).Run in goroutine 3249
	/usr/local/go/src/testing/testing.go:1997 +0x364

                                                
                                                
goroutine 177 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x4000224080?}, 0x40002772a0?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 170
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

                                                
                                                
goroutine 4268 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4267
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

                                                
                                                
goroutine 4070 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x4000082070}, 0x4001576740, 0x4004fe6f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x4000082070}, 0x90?, 0x4001576740, 0x4001576788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x4000082070?}, 0x400151de00?, 0x40002a3400?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x4000670480?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4063
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

                                                
                                                
goroutine 178 [chan receive, 117 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4004f95560, 0x4000082070)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 170
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

                                                
                                                
goroutine 5460 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x36e65a8, 0x4000071a90}, {0x36d4660, 0x4001451ac0}, 0x1, 0x0, 0x4001363b00)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/loop.go:66 +0x158
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x36e6618?, 0x40003536c0?}, 0x3b9aca00, 0x4001363d28?, 0x1, 0x4001363b00)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:48 +0x8c
k8s.io/minikube/test/integration.PodWait({0x36e6618, 0x40003536c0}, 0x4001b0d180, {0x4004e5c540, 0x11}, {0x29941e1, 0x14}, {0x29ac150, 0x1c}, 0x7dba821800)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:380 +0x22c
k8s.io/minikube/test/integration.validateAddonAfterStop({0x36e6618, 0x40003536c0}, 0x4001b0d180, {0x4004e5c540, 0x11}, {0x29786f9?, 0x175a090300161e84?}, {0x693dc995?, 0x400010df58?}, {0x161f08?, ...})
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:285 +0xd4
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0x4001b0d180?)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:154 +0x44
testing.tRunner(0x4001b0d180, 0x400161a000)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3793
	/usr/local/go/src/testing/testing.go:1997 +0x364
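
Goroutine 5460 above is the AddonExistsAfterStop step itself, parked in wait.PollUntilContextTimeout inside the test helper's PodWait while the apiserver keeps refusing connections. A simplified, assumed sketch of that polling pattern with client-go; the namespace, label selector and 9m timeout come from the log above, the rest is illustrative and not the helper's real code:

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a standard kubeconfig at ~/.kube/config.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll once per second for up to 9 minutes, matching the "waiting 9m0s" message above.
	err = wait.PollUntilContextTimeout(context.Background(), time.Second, 9*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
				LabelSelector: "k8s-app=kubernetes-dashboard",
			})
			if err != nil {
				// Transient API errors (e.g. connection refused while the cluster is down)
				// are logged and retried rather than failing the wait outright.
				fmt.Println("pod list warning:", err)
				return false, nil
			}
			return len(pods.Items) > 0, nil
		})
	if err != nil {
		fmt.Println("gave up waiting:", err)
	}
}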

                                                
                                                
goroutine 3618 [chan receive, 31 minutes]:
testing.(*testState).waitParallel(0x40006db220)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.(*T).Parallel(0x40015096c0)
	/usr/local/go/src/testing/testing.go:1709 +0x19c
k8s.io/minikube/test/integration.MaybeParallel(0x40015096c0)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:501 +0x5c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x40015096c0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x2c0
testing.tRunner(0x40015096c0, 0x4001678800)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3540
	/usr/local/go/src/testing/testing.go:1997 +0x364

                                                
                                                
goroutine 4063 [chan receive, 20 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40013e35c0, 0x4000082070)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4065
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

                                                
                                                
goroutine 4267 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x4000082070}, 0x4001540f40, 0x4001540f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x4000082070}, 0x50?, 0x4001540f40, 0x4001540f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x4000082070?}, 0x400031e900?, 0x400043b900?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x400031e600?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4240
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

                                                
                                                
goroutine 3249 [chan receive, 31 minutes]:
testing.(*T).Run(0x4001b0c1c0, {0x296d71f?, 0x74c3eb90f7c?}, 0x4000aec228)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestNetworkPlugins(0x4001b0c1c0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:52 +0xe4
testing.tRunner(0x4001b0c1c0, 0x339baf0)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1997 +0x364

                                                
                                                
goroutine 3617 [chan receive, 31 minutes]:
testing.(*testState).waitParallel(0x40006db220)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.(*T).Parallel(0x4001509340)
	/usr/local/go/src/testing/testing.go:1709 +0x19c
k8s.io/minikube/test/integration.MaybeParallel(0x4001509340)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:501 +0x5c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x4001509340)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x2c0
testing.tRunner(0x4001509340, 0x4001678780)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3540
	/usr/local/go/src/testing/testing.go:1997 +0x364

                                                
                                                
goroutine 4071 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4070
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

                                                
                                                
goroutine 3802 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x4000224080?}, 0x4001691380?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3798
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

                                                
                                                
goroutine 4266 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0x40016a2410, 0x1)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40016a2400)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001914ae0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4001571f18?, 0x1618bc?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x4000082070?}, 0x4001571ea8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x4000082070}, 0x4004fe0f38, {0x369e520, 0x40018f8030}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x36f42d0?, {0x369e520?, 0x40018f8030?}, 0xe0?, 0x400034b8f0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001984030, 0x3b9aca00, 0x0, 0x1, 0x4000082070)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4240
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

                                                
                                                
goroutine 5377 [IO wait, 1 minutes]:
internal/poll.runtime_pollWait(0xffff4a61d200, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x400159a660?, 0x4001379341?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x400159a660, {0x4001379341, 0x4bf, 0x4bf})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x1e0
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0x4000113088, {0x4001379341?, 0x4001381d48?, 0xcc76c?})
	/usr/local/go/src/os/file.go:144 +0x68
bytes.(*Buffer).ReadFrom(0x400074b170, {0x369c8e8, 0x40017f0138})
	/usr/local/go/src/bytes/buffer.go:217 +0x90
io.copyBuffer({0x369cae0, 0x400074b170}, {0x369c8e8, 0x40017f0138}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x14c
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x4000113088?, {0x369cae0, 0x400074b170})
	/usr/local/go/src/os/file.go:295 +0x58
os.(*File).WriteTo(0x4000113088, {0x369cae0, 0x400074b170})
	/usr/local/go/src/os/file.go:273 +0x9c
io.copyBuffer({0x369cae0, 0x400074b170}, {0x369c968, 0x4000113088}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x98
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x40
os/exec.(*Cmd).Start.func2(0x4001b0d340?)
	/usr/local/go/src/os/exec/exec.go:749 +0x30
created by os/exec.(*Cmd).Start in goroutine 5360
	/usr/local/go/src/os/exec/exec.go:748 +0x6a4

                                                
                                                
goroutine 2074 [chan send, 80 minutes]:
os/exec.(*Cmd).watchCtx(0x400031e780, 0x40019ed570)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 1453
	/usr/local/go/src/os/exec/exec.go:775 +0x678

                                                
                                                
goroutine 5379 [select, 1 minutes]:
os/exec.(*Cmd).watchCtx(0x4000670780, 0x400155a930)
	/usr/local/go/src/os/exec/exec.go:789 +0x70
created by os/exec.(*Cmd).Start in goroutine 5360
	/usr/local/go/src/os/exec/exec.go:775 +0x678

                                                
                                                
goroutine 1648 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0x40016a3050, 0x24)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40016a3040)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40019b0cc0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40019ecb60?, 0x1618bc?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x4000082070?}, 0x4001382ea8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x4000082070}, 0x40015ebf38, {0x369e520, 0x4000ad99e0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x4001382fa8?, {0x369e520?, 0x4000ad99e0?}, 0x90?, 0x6720202020202020?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x400038b990, 0x3b9aca00, 0x0, 0x1, 0x4000082070)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 1645
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

                                                
                                                
goroutine 1019 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0x4001844000, 0x400160d340)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 772
	/usr/local/go/src/os/exec/exec.go:775 +0x678

                                                
                                                
goroutine 3484 [chan receive, 27 minutes]:
testing.(*T).Run(0x4001b0ddc0, {0x296eb91?, 0x0?}, 0x4001678400)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop.func1.1(0x4001b0ddc0)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:128 +0x7e4
testing.tRunner(0x4001b0ddc0, 0x40016a31c0)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3480
	/usr/local/go/src/testing/testing.go:1997 +0x364

                                                
                                                
goroutine 841 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0x4000c32cd0, 0x2c)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4000c32cc0)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40013fe120)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40002f70a0?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x4000082070?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x4000082070}, 0x4001551f38, {0x369e520, 0x4000611080}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x36f42d0?, {0x369e520?, 0x4000611080?}, 0x70?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40016643a0, 0x3b9aca00, 0x0, 0x1, 0x4000082070)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 855
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

                                                
                                                
goroutine 3804 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x4000082070}, 0x40000a1740, 0x40000a1788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x4000082070}, 0x88?, 0x40000a1740, 0x40000a1788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x4000082070?}, 0x0?, 0x95c64?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x40013e9380?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 3775
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

                                                
                                                
goroutine 3685 [chan receive, 31 minutes]:
testing.(*testState).waitParallel(0x40006db220)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.(*T).Parallel(0x4001b0cfc0)
	/usr/local/go/src/testing/testing.go:1709 +0x19c
k8s.io/minikube/test/integration.MaybeParallel(0x4001b0cfc0)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:501 +0x5c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x4001b0cfc0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x2c0
testing.tRunner(0x4001b0cfc0, 0x4001679280)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3540
	/usr/local/go/src/testing/testing.go:1997 +0x364

                                                
                                                
goroutine 1070 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0x40017be300, 0x40016ff6c0)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 1069
	/usr/local/go/src/os/exec/exec.go:775 +0x678

                                                
                                                
goroutine 3480 [chan receive, 1 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1891 +0x3d0
testing.tRunner(0x4001b0d6c0, 0x339bd20)
	/usr/local/go/src/testing/testing.go:1940 +0x104
created by testing.(*T).Run in goroutine 3300
	/usr/local/go/src/testing/testing.go:1997 +0x364

                                                
                                                
goroutine 998 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0x400160ec00, 0x400160c850)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 997
	/usr/local/go/src/os/exec/exec.go:775 +0x678

                                                
                                                
goroutine 1154 [select, 109 minutes]:
net/http.(*persistConn).readLoop(0x40015f6c60)
	/usr/local/go/src/net/http/transport.go:2398 +0xa6c
created by net/http.(*Transport).dialConn in goroutine 1152
	/usr/local/go/src/net/http/transport.go:1947 +0x111c

                                                
                                                
goroutine 3620 [chan receive, 31 minutes]:
testing.(*testState).waitParallel(0x40006db220)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.(*T).Parallel(0x40014a1500)
	/usr/local/go/src/testing/testing.go:1709 +0x19c
k8s.io/minikube/test/integration.MaybeParallel(0x40014a1500)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:501 +0x5c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x40014a1500)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x2c0
testing.tRunner(0x40014a1500, 0x4001678900)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3540
	/usr/local/go/src/testing/testing.go:1997 +0x364

                                                
                                                
goroutine 1644 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x4000224080?}, 0x40013de700?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 1462
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

                                                
                                                
goroutine 4239 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x4000224080?}, 0x40015028c0?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4238
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

                                                
                                                
goroutine 3775 [chan receive, 27 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40013e2840, 0x4000082070)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3798
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

                                                
                                                
goroutine 855 [chan receive, 112 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40013fe120, 0x4000082070)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 853
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

                                                
                                                
goroutine 842 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x4000082070}, 0x40000a1740, 0x4004fe1f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x4000082070}, 0x31?, 0x40000a1740, 0x40000a1788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x4000082070?}, 0x40000f7500?, 0x400043b900?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x4000671e00?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 855
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

                                                
                                                
goroutine 1645 [chan receive, 82 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40019b0cc0, 0x4000082070)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 1462
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

                                                
                                                
goroutine 843 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 842
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

                                                
                                                
goroutine 854 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x4000224080?}, 0x4000671c80?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 853
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

                                                
                                                
goroutine 1155 [select, 109 minutes]:
net/http.(*persistConn).writeLoop(0x40015f6c60)
	/usr/local/go/src/net/http/transport.go:2600 +0x94
created by net/http.(*Transport).dialConn in goroutine 1152
	/usr/local/go/src/net/http/transport.go:1948 +0x1164

                                                
                                                
goroutine 3793 [chan receive]:
testing.(*T).Run(0x4001b0d500, {0x2994231?, 0x40000006ee?}, 0x400161a000)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0x4001b0d500)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:153 +0x1b8
testing.tRunner(0x4001b0d500, 0x4001678400)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3484
	/usr/local/go/src/testing/testing.go:1997 +0x364

                                                
                                                
goroutine 1282 [IO wait, 109 minutes]:
internal/poll.runtime_pollWait(0xffff4a61d800, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x40016ec880?, 0xdbd0c?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0x40016ec880)
	/usr/local/go/src/internal/poll/fd_unix.go:613 +0x21c
net.(*netFD).accept(0x40016ec880)
	/usr/local/go/src/net/fd_unix.go:161 +0x28
net.(*TCPListener).accept(0x40018ace80)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x24
net.(*TCPListener).Accept(0x40018ace80)
	/usr/local/go/src/net/tcpsock.go:380 +0x2c
net/http.(*Server).Serve(0x4001922100, {0x36d4000, 0x40018ace80})
	/usr/local/go/src/net/http/server.go:3463 +0x24c
net/http.(*Server).ListenAndServe(0x4001922100)
	/usr/local/go/src/net/http/server.go:3389 +0x80
k8s.io/minikube/test/integration.startHTTPProxy.func1(...)
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2218
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 1248
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2217 +0x104

                                                
                                                
goroutine 3619 [chan receive, 31 minutes]:
testing.(*testState).waitParallel(0x40006db220)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.(*T).Parallel(0x4001509a40)
	/usr/local/go/src/testing/testing.go:1709 +0x19c
k8s.io/minikube/test/integration.MaybeParallel(0x4001509a40)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:501 +0x5c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x4001509a40)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x2c0
testing.tRunner(0x4001509a40, 0x4001678880)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3540
	/usr/local/go/src/testing/testing.go:1997 +0x364

                                                
                                                
goroutine 4240 [chan receive, 9 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4001914ae0, 0x4000082070)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4238
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

                                                
                                                
goroutine 5360 [syscall, 1 minutes]:
syscall.Syscall6(0x5f, 0x3, 0x11, 0x40014e8c38, 0x4, 0x40019a03f0, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:96 +0x2c
internal/syscall/unix.Waitid(0x40014e8d98?, 0x1929a0?, 0xffffe2e311a7?, 0x0?, 0x4004f322c0?)
	/usr/local/go/src/internal/syscall/unix/waitid_linux.go:18 +0x44
os.(*Process).pidfdWait.func1(...)
	/usr/local/go/src/os/pidfd_linux.go:109
os.ignoringEINTR(...)
	/usr/local/go/src/os/file_posix.go:256
os.(*Process).pidfdWait(0x40016a2280)
	/usr/local/go/src/os/pidfd_linux.go:108 +0x144
os.(*Process).wait(0x40014e8d68?)
	/usr/local/go/src/os/exec_unix.go:25 +0x24
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:340
os/exec.(*Cmd).Wait(0x4000670780)
	/usr/local/go/src/os/exec/exec.go:922 +0x38
os/exec.(*Cmd).Run(0x4000670780)
	/usr/local/go/src/os/exec/exec.go:626 +0x38
k8s.io/minikube/test/integration.Run(0x4001b0d340, 0x4000670780)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:104 +0x154
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.1(0x4001b0d340)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:112 +0x44
testing.tRunner(0x4001b0d340, 0x400074aea0)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3541
	/usr/local/go/src/testing/testing.go:1997 +0x364

                                                
                                                
goroutine 3805 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3804
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

                                                
                                                
goroutine 1665 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x4000082070}, 0x40014e3f40, 0x4004fe5f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x4000082070}, 0xe8?, 0x40014e3f40, 0x40014e3f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x4000082070?}, 0x400155fe00?, 0x40000fdcc0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x40013e9200?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 1645
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

                                                
                                                
goroutine 1666 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 1665
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

                                                
                                                
goroutine 4069 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0x40016a2ed0, 0x13)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40016a2ec0)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40013e35c0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x400017ab60?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x4000082070?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x4000082070}, 0x4004fe4f38, {0x369e520, 0x40016d4ea0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x36f42d0?, {0x369e520?, 0x40016d4ea0?}, 0xd0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40019848f0, 0x3b9aca00, 0x0, 0x1, 0x4000082070)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4063
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

                                                
                                                
goroutine 3300 [chan receive, 35 minutes]:
testing.(*T).Run(0x40015088c0, {0x296d71f?, 0x40015e8f58?}, 0x339bd20)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop(0x40015088c0)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:46 +0x3c
testing.tRunner(0x40015088c0, 0x339bb38)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1997 +0x364

                                                
                                                
goroutine 3803 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0x40016a2b10, 0x16)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40016a2b00)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40013e2840)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x400044af50?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x4000082070?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x4000082070}, 0x40015e4f38, {0x369e520, 0x4001684480}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x36f42d0?, {0x369e520?, 0x4001684480?}, 0x90?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40015df3f0, 0x3b9aca00, 0x0, 0x1, 0x4000082070)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 3775
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

                                                
                                                
goroutine 1996 [chan send, 80 minutes]:
os/exec.(*Cmd).watchCtx(0x40000f6780, 0x40015c6a10)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 1995
	/usr/local/go/src/os/exec/exec.go:775 +0x678

                                                
                                                
goroutine 5378 [IO wait]:
internal/poll.runtime_pollWait(0xffff4a61d600, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x400159a720?, 0x4001716484?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x400159a720, {0x4001716484, 0x9b7c, 0x9b7c})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x1e0
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0x40001130b8, {0x4001716484?, 0x4001571548?, 0xcc76c?})
	/usr/local/go/src/os/file.go:144 +0x68
bytes.(*Buffer).ReadFrom(0x400074b230, {0x369c8e8, 0x40017f0140})
	/usr/local/go/src/bytes/buffer.go:217 +0x90
io.copyBuffer({0x369cae0, 0x400074b230}, {0x369c8e8, 0x40017f0140}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x14c
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x40001130b8?, {0x369cae0, 0x400074b230})
	/usr/local/go/src/os/file.go:295 +0x58
os.(*File).WriteTo(0x40001130b8, {0x369cae0, 0x400074b230})
	/usr/local/go/src/os/file.go:273 +0x9c
io.copyBuffer({0x369cae0, 0x400074b230}, {0x369c968, 0x40001130b8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x98
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x40
os/exec.(*Cmd).Start.func2(0x4000670600?)
	/usr/local/go/src/os/exec/exec.go:749 +0x30
created by os/exec.(*Cmd).Start in goroutine 5360
	/usr/local/go/src/os/exec/exec.go:748 +0x6a4

                                                
                                                
goroutine 2049 [chan send, 80 minutes]:
os/exec.(*Cmd).watchCtx(0x40013e9080, 0x400155b0a0)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 2048
	/usr/local/go/src/os/exec/exec.go:775 +0x678

                                                
                                                
goroutine 4062 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x4000224080?}, 0x4001502c40?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4065
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

                                                
                                    

Test pass (236/316)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 6.35
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.1
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.2/json-events 5.47
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.09
18 TestDownloadOnly/v1.34.2/DeleteAll 0.22
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.35.0-beta.0/json-events 4.55
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.09
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.24
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.15
30 TestBinaryMirror 0.65
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
36 TestAddons/Setup 157.7
40 TestAddons/serial/GCPAuth/Namespaces 0.27
41 TestAddons/serial/GCPAuth/FakeCredentials 9.88
57 TestAddons/StoppedEnableDisable 12.42
58 TestCertOptions 33.74
59 TestCertExpiration 241.44
61 TestForceSystemdFlag 35.97
62 TestForceSystemdEnv 36.77
67 TestErrorSpam/setup 32.41
68 TestErrorSpam/start 0.8
69 TestErrorSpam/status 1.13
70 TestErrorSpam/pause 6.14
71 TestErrorSpam/unpause 6.02
72 TestErrorSpam/stop 1.53
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 80.74
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 26.46
79 TestFunctional/serial/KubeContext 0.07
80 TestFunctional/serial/KubectlGetPods 0.11
83 TestFunctional/serial/CacheCmd/cache/add_remote 3.67
84 TestFunctional/serial/CacheCmd/cache/add_local 1.24
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
86 TestFunctional/serial/CacheCmd/cache/list 0.05
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.9
89 TestFunctional/serial/CacheCmd/cache/delete 0.1
90 TestFunctional/serial/MinikubeKubectlCmd 0.14
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
92 TestFunctional/serial/ExtraConfig 40.72
93 TestFunctional/serial/ComponentHealth 0.1
94 TestFunctional/serial/LogsCmd 1.48
95 TestFunctional/serial/LogsFileCmd 1.53
96 TestFunctional/serial/InvalidService 4.62
98 TestFunctional/parallel/ConfigCmd 0.45
99 TestFunctional/parallel/DashboardCmd 10.93
100 TestFunctional/parallel/DryRun 0.45
101 TestFunctional/parallel/InternationalLanguage 0.21
102 TestFunctional/parallel/StatusCmd 1.16
106 TestFunctional/parallel/ServiceCmdConnect 7.58
107 TestFunctional/parallel/AddonsCmd 0.14
108 TestFunctional/parallel/PersistentVolumeClaim 18.87
110 TestFunctional/parallel/SSHCmd 0.7
111 TestFunctional/parallel/CpCmd 2.5
113 TestFunctional/parallel/FileSync 0.43
114 TestFunctional/parallel/CertSync 2.17
118 TestFunctional/parallel/NodeLabels 0.12
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.87
122 TestFunctional/parallel/License 0.33
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.65
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
127 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.45
128 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
129 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
133 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
134 TestFunctional/parallel/ServiceCmd/DeployApp 6.23
135 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
136 TestFunctional/parallel/ProfileCmd/profile_list 0.42
137 TestFunctional/parallel/ProfileCmd/profile_json_output 0.44
138 TestFunctional/parallel/MountCmd/any-port 8.51
139 TestFunctional/parallel/ServiceCmd/List 0.6
140 TestFunctional/parallel/ServiceCmd/JSONOutput 0.55
141 TestFunctional/parallel/ServiceCmd/HTTPS 0.39
142 TestFunctional/parallel/ServiceCmd/Format 0.39
143 TestFunctional/parallel/ServiceCmd/URL 0.38
144 TestFunctional/parallel/MountCmd/specific-port 2.16
145 TestFunctional/parallel/MountCmd/VerifyCleanup 2.81
146 TestFunctional/parallel/Version/short 0.08
147 TestFunctional/parallel/Version/components 1.05
148 TestFunctional/parallel/ImageCommands/ImageListShort 0.29
149 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
150 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
151 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
152 TestFunctional/parallel/ImageCommands/ImageBuild 3.98
153 TestFunctional/parallel/ImageCommands/Setup 0.62
154 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.21
155 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.09
156 TestFunctional/parallel/UpdateContextCmd/no_changes 0.22
157 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.19
158 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.21
159 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.34
160 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.52
161 TestFunctional/parallel/ImageCommands/ImageRemove 0.6
162 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.79
163 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.49
164 TestFunctional/delete_echo-server_images 0.04
165 TestFunctional/delete_my-image_image 0.02
166 TestFunctional/delete_minikube_cached_images 0.02
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
174 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.06
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 3.45
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 1.09
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.06
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.06
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.31
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.87
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.12
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 0.93
190 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 0.96
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.47
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.46
196 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.2
202 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.15
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.86
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 2.08
208 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.35
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 2.3
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.68
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.45
218 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.05
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.49
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.23
221 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.23
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.22
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.24
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 3.81
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.26
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.55
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 1.03
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.34
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.15
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.15
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.14
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.5
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.65
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.89
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.52
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.55
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.56
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.78
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.11
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 1.84
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 2.09
258 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.03
259 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
260 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
264 TestMultiControlPlane/serial/StartCluster 202.92
265 TestMultiControlPlane/serial/DeployApp 6.65
266 TestMultiControlPlane/serial/PingHostFromPods 1.56
267 TestMultiControlPlane/serial/AddWorkerNode 61.71
268 TestMultiControlPlane/serial/NodeLabels 0.1
269 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.06
270 TestMultiControlPlane/serial/CopyFile 20.3
271 TestMultiControlPlane/serial/StopSecondaryNode 13.01
272 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.81
273 TestMultiControlPlane/serial/RestartSecondaryNode 31.31
274 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.35
275 TestMultiControlPlane/serial/RestartClusterKeepsNodes 147.94
276 TestMultiControlPlane/serial/DeleteSecondaryNode 12.13
277 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.78
278 TestMultiControlPlane/serial/StopCluster 36.17
287 TestJSONOutput/start/Command 82.03
288 TestJSONOutput/start/Audit 0
290 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
291 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
294 TestJSONOutput/pause/Audit 0
296 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
297 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
300 TestJSONOutput/unpause/Audit 0
302 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
303 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
305 TestJSONOutput/stop/Command 5.86
306 TestJSONOutput/stop/Audit 0
308 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
309 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
310 TestErrorJSONOutput 0.25
312 TestKicCustomNetwork/create_custom_network 42.46
313 TestKicCustomNetwork/use_default_bridge_network 35.52
314 TestKicExistingNetwork 33.27
315 TestKicCustomSubnet 35.84
316 TestKicStaticIP 34.2
317 TestMainNoArgs 0.05
318 TestMinikubeProfile 70.68
321 TestMountStart/serial/StartWithMountFirst 9.44
322 TestMountStart/serial/VerifyMountFirst 0.28
323 TestMountStart/serial/StartWithMountSecond 9.13
324 TestMountStart/serial/VerifyMountSecond 0.31
325 TestMountStart/serial/DeleteFirst 1.7
326 TestMountStart/serial/VerifyMountPostDelete 0.27
327 TestMountStart/serial/Stop 1.33
328 TestMountStart/serial/RestartStopped 8.33
329 TestMountStart/serial/VerifyMountPostStop 0.28
332 TestMultiNode/serial/FreshStart2Nodes 137.5
333 TestMultiNode/serial/DeployApp2Nodes 4.76
334 TestMultiNode/serial/PingHostFrom2Pods 0.99
335 TestMultiNode/serial/AddNode 57.93
336 TestMultiNode/serial/MultiNodeLabels 0.09
337 TestMultiNode/serial/ProfileList 0.74
338 TestMultiNode/serial/CopyFile 10.62
339 TestMultiNode/serial/StopNode 2.46
340 TestMultiNode/serial/StartAfterStop 8.36
341 TestMultiNode/serial/RestartKeepsNodes 80.21
342 TestMultiNode/serial/DeleteNode 5.68
343 TestMultiNode/serial/StopMultiNode 24.05
344 TestMultiNode/serial/RestartMultiNode 54.9
345 TestMultiNode/serial/ValidateNameConflict 35.5
350 TestPreload 118.56
352 TestScheduledStopUnix 107.68
355 TestInsufficientStorage 13.15
356 TestRunningBinaryUpgrade 54.59
359 TestMissingContainerUpgrade 125.72
361 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
362 TestNoKubernetes/serial/StartWithK8s 48.4
363 TestNoKubernetes/serial/StartWithStopK8s 8.14
364 TestNoKubernetes/serial/Start 9.71
365 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
366 TestNoKubernetes/serial/VerifyK8sNotRunning 0.38
367 TestNoKubernetes/serial/ProfileList 3.05
368 TestNoKubernetes/serial/Stop 1.29
369 TestNoKubernetes/serial/StartNoArgs 7.13
370 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.3
371 TestStoppedBinaryUpgrade/Setup 2.04
372 TestStoppedBinaryUpgrade/Upgrade 301.15
373 TestStoppedBinaryUpgrade/MinikubeLogs 1.76
382 TestPause/serial/Start 84.11
383 TestPause/serial/SecondStartNoReconfiguration 30.26
x
+
TestDownloadOnly/v1.28.0/json-events (6.35s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-682129 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-682129 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.354189611s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (6.35s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1213 18:16:52.923281    4637 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1213 18:16:52.923355    4637 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-2686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-682129
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-682129: exit status 85 (95.467178ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-682129 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-682129 │ jenkins │ v1.37.0 │ 13 Dec 25 18:16 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 18:16:46
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 18:16:46.613192    4643 out.go:360] Setting OutFile to fd 1 ...
	I1213 18:16:46.613323    4643 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:16:46.613336    4643 out.go:374] Setting ErrFile to fd 2...
	I1213 18:16:46.613342    4643 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:16:46.613622    4643 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-2686/.minikube/bin
	W1213 18:16:46.613763    4643 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22122-2686/.minikube/config/config.json: open /home/jenkins/minikube-integration/22122-2686/.minikube/config/config.json: no such file or directory
	I1213 18:16:46.614157    4643 out.go:368] Setting JSON to true
	I1213 18:16:46.614944    4643 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":3559,"bootTime":1765646248,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 18:16:46.615013    4643 start.go:143] virtualization:  
	I1213 18:16:46.620664    4643 out.go:99] [download-only-682129] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1213 18:16:46.620839    4643 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22122-2686/.minikube/cache/preloaded-tarball: no such file or directory
	I1213 18:16:46.620909    4643 notify.go:221] Checking for updates...
	I1213 18:16:46.624084    4643 out.go:171] MINIKUBE_LOCATION=22122
	I1213 18:16:46.627467    4643 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 18:16:46.630515    4643 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22122-2686/kubeconfig
	I1213 18:16:46.633653    4643 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-2686/.minikube
	I1213 18:16:46.636915    4643 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1213 18:16:46.642812    4643 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1213 18:16:46.643072    4643 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 18:16:46.672053    4643 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 18:16:46.672169    4643 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 18:16:47.086391    4643 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-13 18:16:47.076769415 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 18:16:47.086489    4643 docker.go:319] overlay module found
	I1213 18:16:47.089574    4643 out.go:99] Using the docker driver based on user configuration
	I1213 18:16:47.089616    4643 start.go:309] selected driver: docker
	I1213 18:16:47.089623    4643 start.go:927] validating driver "docker" against <nil>
	I1213 18:16:47.089735    4643 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 18:16:47.158170    4643 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-13 18:16:47.145678692 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 18:16:47.158324    4643 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 18:16:47.158635    4643 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1213 18:16:47.158799    4643 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1213 18:16:47.161964    4643 out.go:171] Using Docker driver with root privileges
	I1213 18:16:47.164741    4643 cni.go:84] Creating CNI manager for ""
	I1213 18:16:47.164800    4643 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 18:16:47.164813    4643 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 18:16:47.164902    4643 start.go:353] cluster config:
	{Name:download-only-682129 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-682129 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 18:16:47.167829    4643 out.go:99] Starting "download-only-682129" primary control-plane node in "download-only-682129" cluster
	I1213 18:16:47.167849    4643 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 18:16:47.170616    4643 out.go:99] Pulling base image v0.0.48-1765275396-22083 ...
	I1213 18:16:47.170658    4643 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1213 18:16:47.170815    4643 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 18:16:47.186407    4643 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f to local cache
	I1213 18:16:47.186611    4643 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory
	I1213 18:16:47.186710    4643 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f to local cache
	I1213 18:16:47.225178    4643 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1213 18:16:47.225210    4643 cache.go:65] Caching tarball of preloaded images
	I1213 18:16:47.225389    4643 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1213 18:16:47.228682    4643 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1213 18:16:47.228710    4643 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1213 18:16:47.313992    4643 preload.go:295] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1213 18:16:47.314129    4643 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/22122-2686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-682129 host does not exist
	  To start a cluster, run: "minikube start -p download-only-682129"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.10s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-682129
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/json-events (5.47s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-380287 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-380287 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.466314166s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (5.47s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1213 18:16:58.834030    4637 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1213 18:16:58.834065    4637 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-2686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-380287
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-380287: exit status 85 (88.793979ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-682129 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-682129 │ jenkins │ v1.37.0 │ 13 Dec 25 18:16 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 13 Dec 25 18:16 UTC │ 13 Dec 25 18:16 UTC │
	│ delete  │ -p download-only-682129                                                                                                                                                   │ download-only-682129 │ jenkins │ v1.37.0 │ 13 Dec 25 18:16 UTC │ 13 Dec 25 18:16 UTC │
	│ start   │ -o=json --download-only -p download-only-380287 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-380287 │ jenkins │ v1.37.0 │ 13 Dec 25 18:16 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 18:16:53
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 18:16:53.415557    4842 out.go:360] Setting OutFile to fd 1 ...
	I1213 18:16:53.415801    4842 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:16:53.415830    4842 out.go:374] Setting ErrFile to fd 2...
	I1213 18:16:53.415849    4842 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:16:53.416191    4842 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-2686/.minikube/bin
	I1213 18:16:53.416738    4842 out.go:368] Setting JSON to true
	I1213 18:16:53.417706    4842 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":3566,"bootTime":1765646248,"procs":149,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 18:16:53.417813    4842 start.go:143] virtualization:  
	I1213 18:16:53.421372    4842 out.go:99] [download-only-380287] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 18:16:53.421673    4842 notify.go:221] Checking for updates...
	I1213 18:16:53.425375    4842 out.go:171] MINIKUBE_LOCATION=22122
	I1213 18:16:53.428784    4842 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 18:16:53.431901    4842 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22122-2686/kubeconfig
	I1213 18:16:53.434811    4842 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-2686/.minikube
	I1213 18:16:53.437754    4842 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1213 18:16:53.443598    4842 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1213 18:16:53.443877    4842 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 18:16:53.464609    4842 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 18:16:53.464713    4842 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 18:16:53.536448    4842 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:48 SystemTime:2025-12-13 18:16:53.527252777 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 18:16:53.536551    4842 docker.go:319] overlay module found
	I1213 18:16:53.539514    4842 out.go:99] Using the docker driver based on user configuration
	I1213 18:16:53.539561    4842 start.go:309] selected driver: docker
	I1213 18:16:53.539569    4842 start.go:927] validating driver "docker" against <nil>
	I1213 18:16:53.539688    4842 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 18:16:53.591287    4842 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:48 SystemTime:2025-12-13 18:16:53.582410844 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 18:16:53.591447    4842 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 18:16:53.591713    4842 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1213 18:16:53.591859    4842 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1213 18:16:53.595081    4842 out.go:171] Using Docker driver with root privileges
	I1213 18:16:53.598025    4842 cni.go:84] Creating CNI manager for ""
	I1213 18:16:53.598101    4842 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 18:16:53.598116    4842 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 18:16:53.598195    4842 start.go:353] cluster config:
	{Name:download-only-380287 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:download-only-380287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 18:16:53.601193    4842 out.go:99] Starting "download-only-380287" primary control-plane node in "download-only-380287" cluster
	I1213 18:16:53.601221    4842 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 18:16:53.604246    4842 out.go:99] Pulling base image v0.0.48-1765275396-22083 ...
	I1213 18:16:53.604294    4842 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 18:16:53.604340    4842 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 18:16:53.620160    4842 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f to local cache
	I1213 18:16:53.620304    4842 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory
	I1213 18:16:53.620326    4842 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory, skipping pull
	I1213 18:16:53.620331    4842 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in cache, skipping pull
	I1213 18:16:53.620338    4842 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f as a tarball
	I1213 18:16:53.653172    4842 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1213 18:16:53.653203    4842 cache.go:65] Caching tarball of preloaded images
	I1213 18:16:53.653386    4842 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 18:16:53.656541    4842 out.go:99] Downloading Kubernetes v1.34.2 preload ...
	I1213 18:16:53.656564    4842 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1213 18:16:53.742302    4842 preload.go:295] Got checksum from GCS API "36a1245638f6169d426638fac0bd307d"
	I1213 18:16:53.742358    4842 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4?checksum=md5:36a1245638f6169d426638fac0bd307d -> /home/jenkins/minikube-integration/22122-2686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1213 18:16:58.118681    4842 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1213 18:16:58.119073    4842 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/download-only-380287/config.json ...
	I1213 18:16:58.119108    4842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/download-only-380287/config.json: {Name:mk327d9c159af47665c0e1d5886356ae70aedfdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 18:16:58.119303    4842 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 18:16:58.119500    4842 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/22122-2686/.minikube/cache/linux/arm64/v1.34.2/kubectl
	
	
	* The control-plane node download-only-380287 host does not exist
	  To start a cluster, run: "minikube start -p download-only-380287"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.09s)

TestDownloadOnly/v1.34.2/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.22s)

TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-380287
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.35.0-beta.0/json-events (4.55s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-512620 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-512620 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.55142565s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (4.55s)

TestDownloadOnly/v1.35.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1213 18:17:03.823292    4637 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
I1213 18:17:03.823328    4637 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-2686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-512620
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-512620: exit status 85 (89.403382ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                       ARGS                                                                                       │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-682129 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-682129 │ jenkins │ v1.37.0 │ 13 Dec 25 18:16 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 13 Dec 25 18:16 UTC │ 13 Dec 25 18:16 UTC │
	│ delete  │ -p download-only-682129                                                                                                                                                          │ download-only-682129 │ jenkins │ v1.37.0 │ 13 Dec 25 18:16 UTC │ 13 Dec 25 18:16 UTC │
	│ start   │ -o=json --download-only -p download-only-380287 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-380287 │ jenkins │ v1.37.0 │ 13 Dec 25 18:16 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 13 Dec 25 18:16 UTC │ 13 Dec 25 18:16 UTC │
	│ delete  │ -p download-only-380287                                                                                                                                                          │ download-only-380287 │ jenkins │ v1.37.0 │ 13 Dec 25 18:16 UTC │ 13 Dec 25 18:16 UTC │
	│ start   │ -o=json --download-only -p download-only-512620 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-512620 │ jenkins │ v1.37.0 │ 13 Dec 25 18:16 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 18:16:59
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 18:16:59.309739    5043 out.go:360] Setting OutFile to fd 1 ...
	I1213 18:16:59.309854    5043 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:16:59.309866    5043 out.go:374] Setting ErrFile to fd 2...
	I1213 18:16:59.309871    5043 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:16:59.310111    5043 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-2686/.minikube/bin
	I1213 18:16:59.310488    5043 out.go:368] Setting JSON to true
	I1213 18:16:59.311245    5043 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":3572,"bootTime":1765646248,"procs":149,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 18:16:59.311309    5043 start.go:143] virtualization:  
	I1213 18:16:59.314627    5043 out.go:99] [download-only-512620] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 18:16:59.314838    5043 notify.go:221] Checking for updates...
	I1213 18:16:59.319288    5043 out.go:171] MINIKUBE_LOCATION=22122
	I1213 18:16:59.322347    5043 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 18:16:59.325277    5043 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22122-2686/kubeconfig
	I1213 18:16:59.328227    5043 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-2686/.minikube
	I1213 18:16:59.331110    5043 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1213 18:16:59.336809    5043 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1213 18:16:59.337103    5043 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 18:16:59.368500    5043 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 18:16:59.368605    5043 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 18:16:59.427441    5043 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-13 18:16:59.418631098 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 18:16:59.427551    5043 docker.go:319] overlay module found
	I1213 18:16:59.430555    5043 out.go:99] Using the docker driver based on user configuration
	I1213 18:16:59.430592    5043 start.go:309] selected driver: docker
	I1213 18:16:59.430599    5043 start.go:927] validating driver "docker" against <nil>
	I1213 18:16:59.430696    5043 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 18:16:59.490197    5043 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-13 18:16:59.480630103 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 18:16:59.490349    5043 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 18:16:59.490607    5043 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1213 18:16:59.490758    5043 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1213 18:16:59.493946    5043 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-512620 host does not exist
	  To start a cluster, run: "minikube start -p download-only-512620"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.09s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.24s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-512620
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.15s)

TestBinaryMirror (0.65s)

=== RUN   TestBinaryMirror
I1213 18:17:05.230012    4637 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-542781 --alsologtostderr --binary-mirror http://127.0.0.1:45875 --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "binary-mirror-542781" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-542781
--- PASS: TestBinaryMirror (0.65s)
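
Note: --binary-mirror points minikube's kubectl/kubelet/kubeadm downloads at a local HTTP endpoint (here the harness serves one on 127.0.0.1:45875). A minimal sketch of the same invocation outside the test, with an illustrative profile name, would be:

	# download-only start that fetches Kubernetes binaries from a local mirror
	out/minikube-linux-arm64 start --download-only -p binary-mirror-demo \
	  --binary-mirror http://127.0.0.1:45875 --driver=docker --container-runtime=crio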

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-377325
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-377325: exit status 85 (76.471817ms)

                                                
                                                
-- stdout --
	* Profile "addons-377325" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-377325"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-377325
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-377325: exit status 85 (80.805869ms)

                                                
                                                
-- stdout --
	* Profile "addons-377325" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-377325"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (157.7s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-377325 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-377325 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m37.694112148s)
--- PASS: TestAddons/Setup (157.70s)
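
Note: once a setup run like this completes, the enabled addon set can be inspected from the same profile; an illustrative follow-up (not run by the test itself) would be:

	# list addon status for the freshly provisioned profile
	out/minikube-linux-arm64 -p addons-377325 addons list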

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.27s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-377325 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-377325 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.27s)

TestAddons/serial/GCPAuth/FakeCredentials (9.88s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-377325 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-377325 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [4987a62a-ffa1-4bce-ada0-94e799629c3e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [4987a62a-ffa1-4bce-ada0-94e799629c3e] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003285151s
addons_test.go:696: (dbg) Run:  kubectl --context addons-377325 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-377325 describe sa gcp-auth-test
addons_test.go:722: (dbg) Run:  kubectl --context addons-377325 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:746: (dbg) Run:  kubectl --context addons-377325 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.88s)

TestAddons/StoppedEnableDisable (12.42s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-377325
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-377325: (12.16370458s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-377325
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-377325
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-377325
--- PASS: TestAddons/StoppedEnableDisable (12.42s)

TestCertOptions (33.74s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-728186 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-728186 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (30.877991678s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-728186 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-728186 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-728186 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-728186" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-728186
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-728186: (2.093123622s)
--- PASS: TestCertOptions (33.74s)

TestCertExpiration (241.44s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-609685 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-609685 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (36.465738406s)
E1213 19:47:48.009741    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 19:49:28.839817    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-609685 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-609685 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (22.016197569s)
helpers_test.go:176: Cleaning up "cert-expiration-609685" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-609685
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-609685: (2.95817332s)
--- PASS: TestCertExpiration (241.44s)
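
Note: the expiry window set via --cert-expiration can be verified on a live profile using the same certificate path TestCertOptions inspects above; a minimal sketch, assuming the cert-expiration-609685 profile were still running (the test deletes it during cleanup), would be:

	# print the apiserver certificate's notAfter date
	out/minikube-linux-arm64 -p cert-expiration-609685 ssh \
	  "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"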

                                                
                                    
TestForceSystemdFlag (35.97s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-477875 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-477875 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (33.118076676s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-477875 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:176: Cleaning up "force-systemd-flag-477875" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-477875
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-477875: (2.544500818s)
--- PASS: TestForceSystemdFlag (35.97s)
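
Note: the "cat /etc/crio/crio.conf.d/02-crio.conf" step above is how the test confirms that --force-systemd reached CRI-O; a rough manual check against the same profile (the cgroup_manager key name is an assumption about the drop-in's contents) would be:

	# check which cgroup manager CRI-O was configured with
	out/minikube-linux-arm64 -p force-systemd-flag-477875 ssh \
	  "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"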

                                                
                                    
TestForceSystemdEnv (36.77s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-215695 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1213 19:46:42.459512    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-350101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-215695 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (34.179247758s)
helpers_test.go:176: Cleaning up "force-systemd-env-215695" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-215695
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-215695: (2.588862005s)
--- PASS: TestForceSystemdEnv (36.77s)

TestErrorSpam/setup (32.41s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-304074 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-304074 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-304074 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-304074 --driver=docker  --container-runtime=crio: (32.414182605s)
--- PASS: TestErrorSpam/setup (32.41s)

TestErrorSpam/start (0.8s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-304074 --log_dir /tmp/nospam-304074 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-304074 --log_dir /tmp/nospam-304074 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-304074 --log_dir /tmp/nospam-304074 start --dry-run
--- PASS: TestErrorSpam/start (0.80s)

TestErrorSpam/status (1.13s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-304074 --log_dir /tmp/nospam-304074 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-304074 --log_dir /tmp/nospam-304074 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-304074 --log_dir /tmp/nospam-304074 status
--- PASS: TestErrorSpam/status (1.13s)

TestErrorSpam/pause (6.14s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-304074 --log_dir /tmp/nospam-304074 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-304074 --log_dir /tmp/nospam-304074 pause: exit status 80 (2.368265702s)

                                                
                                                
-- stdout --
	* Pausing node nospam-304074 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T18:23:41Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-304074 --log_dir /tmp/nospam-304074 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-304074 --log_dir /tmp/nospam-304074 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-304074 --log_dir /tmp/nospam-304074 pause: exit status 80 (2.21805576s)

                                                
                                                
-- stdout --
	* Pausing node nospam-304074 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T18:23:43Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-304074 --log_dir /tmp/nospam-304074 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-304074 --log_dir /tmp/nospam-304074 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-304074 --log_dir /tmp/nospam-304074 pause: exit status 80 (1.55385126s)

                                                
                                                
-- stdout --
	* Pausing node nospam-304074 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T18:23:45Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-304074 --log_dir /tmp/nospam-304074 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.14s)
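
Note: each GUEST_PAUSE failure above comes from the same probe: minikube pause first asks runc inside the node for the list of running containers, and that "sudo runc list -f json" call fails because /run/runc does not exist. A hand-run reproduction of the probe, using this run's node name and assuming the kicbase node is reachable via docker exec, would be:

	# run the same container listing that minikube's pause performs inside the node
	docker exec nospam-304074 sudo runc list -f json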

                                                
                                    
TestErrorSpam/unpause (6.02s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-304074 --log_dir /tmp/nospam-304074 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-304074 --log_dir /tmp/nospam-304074 unpause: exit status 80 (1.83171971s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-304074 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T18:23:47Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-304074 --log_dir /tmp/nospam-304074 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-304074 --log_dir /tmp/nospam-304074 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-304074 --log_dir /tmp/nospam-304074 unpause: exit status 80 (2.241657462s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-304074 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T18:23:49Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-304074 --log_dir /tmp/nospam-304074 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-304074 --log_dir /tmp/nospam-304074 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-304074 --log_dir /tmp/nospam-304074 unpause: exit status 80 (1.950058486s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-304074 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T18:23:51Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-304074 --log_dir /tmp/nospam-304074 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (6.02s)
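The three unpause attempts above all surface the same GUEST_UNPAUSE failure through a non-zero exit code, which is how the error_spam helpers decide whether the run produced the error they expect. A minimal Go sketch of that run-and-inspect pattern, assuming only the binary path and profile name shown in this log (this is not the actual test helper):

	// Run the minikube binary and report its exit status, in the spirit of
	// the error_spam_test invocations above. Binary path and profile name
	// are taken from this log and are assumptions, not framework code.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64",
			"-p", "nospam-304074", "--log_dir", "/tmp/nospam-304074", "unpause")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)

		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// The runs above ended here with exit status 80 (GUEST_UNPAUSE).
			fmt.Println("non-zero exit:", exitErr.ExitCode())
		} else if err != nil {
			fmt.Println("could not run command:", err)
		}
	}

Because the log shows exit status 80 alongside the GUEST_UNPAUSE message, the test records the failure text and still passes: the point of the check is that the error is reported, not that unpause succeeds.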

                                                
                                    
TestErrorSpam/stop (1.53s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-304074 --log_dir /tmp/nospam-304074 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-304074 --log_dir /tmp/nospam-304074 stop: (1.319103838s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-304074 --log_dir /tmp/nospam-304074 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-304074 --log_dir /tmp/nospam-304074 stop
--- PASS: TestErrorSpam/stop (1.53s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/test/nested/copy/4637/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (80.74s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-350101 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1213 18:24:44.927288    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 18:24:44.933721    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 18:24:44.945113    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 18:24:44.966567    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 18:24:45.007972    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 18:24:45.097116    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 18:24:45.259420    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 18:24:45.581207    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 18:24:46.222792    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 18:24:47.504307    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 18:24:50.067566    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 18:24:55.189155    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 18:25:05.430497    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-350101 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m20.742315277s)
--- PASS: TestFunctional/serial/StartWithProxy (80.74s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (26.46s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1213 18:25:17.974867    4637 config.go:182] Loaded profile config "functional-350101": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-350101 --alsologtostderr -v=8
E1213 18:25:25.912320    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-350101 --alsologtostderr -v=8: (26.454131656s)
functional_test.go:678: soft start took 26.459454611s for "functional-350101" cluster.
I1213 18:25:44.429316    4637 config.go:182] Loaded profile config "functional-350101": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (26.46s)

                                                
                                    
TestFunctional/serial/KubeContext (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-350101 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.67s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-350101 cache add registry.k8s.io/pause:3.1: (1.313677971s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-350101 cache add registry.k8s.io/pause:3.3: (1.213838504s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-350101 cache add registry.k8s.io/pause:latest: (1.137786224s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.67s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.24s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-350101 /tmp/TestFunctionalserialCacheCmdcacheadd_local72614287/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 cache add minikube-local-cache-test:functional-350101
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 cache delete minikube-local-cache-test:functional-350101
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-350101
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.24s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.9s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-350101 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (309.709292ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.90s)
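The cache_reload steps above amount to: probe the node for the image with `crictl inspecti`, and if the probe fails (as it does after the `rmi`), restore it with `minikube cache reload`. A hedged Go sketch of that check-then-reload flow, reusing the profile, image name and binary path from this log (not code from the test suite):

	// Check whether an image is present inside the node's runtime and
	// restore it from minikube's local cache if it is missing.
	package main

	import (
		"fmt"
		"os/exec"
	)

	const minikube = "out/minikube-linux-arm64"

	func imagePresent(profile, image string) bool {
		// A non-zero exit (as in the log: "no such image ... present") means missing.
		err := exec.Command(minikube, "-p", profile, "ssh",
			"sudo", "crictl", "inspecti", image).Run()
		return err == nil
	}

	func main() {
		profile, image := "functional-350101", "registry.k8s.io/pause:latest"
		if !imagePresent(profile, image) {
			if err := exec.Command(minikube, "-p", profile, "cache", "reload").Run(); err != nil {
				fmt.Println("cache reload failed:", err)
				return
			}
		}
		fmt.Println("image present:", imagePresent(profile, image))
	}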

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 kubectl -- --context functional-350101 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-350101 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
TestFunctional/serial/ExtraConfig (40.72s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-350101 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1213 18:26:06.875290    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-350101 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.721642618s)
functional_test.go:776: restart took 40.721759606s for "functional-350101" cluster.
I1213 18:26:32.929364    4637 config.go:182] Loaded profile config "functional-350101": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (40.72s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-350101 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)
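ComponentHealth lists the control-plane pods as JSON and checks each pod's phase and Ready condition, which is exactly what the log lines above report. A minimal sketch of that check, modelling only the handful of fields used here (the stand-in struct is not the full Kubernetes Pod type):

	// Decode the output of `kubectl get po -l tier=control-plane -n kube-system -o=json`
	// and print each pod's phase and Ready condition.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type podList struct {
		Items []struct {
			Metadata struct {
				Name string `json:"name"`
			} `json:"metadata"`
			Status struct {
				Phase      string `json:"phase"`
				Conditions []struct {
					Type   string `json:"type"`
					Status string `json:"status"`
				} `json:"conditions"`
			} `json:"status"`
		} `json:"items"`
	}

	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-350101",
			"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
		if err != nil {
			fmt.Println("kubectl failed:", err)
			return
		}
		var pods podList
		if err := json.Unmarshal(out, &pods); err != nil {
			fmt.Println("bad JSON:", err)
			return
		}
		for _, p := range pods.Items {
			ready := "Unknown"
			for _, c := range p.Status.Conditions {
				if c.Type == "Ready" {
					ready = c.Status
				}
			}
			fmt.Printf("%s phase=%s ready=%s\n", p.Metadata.Name, p.Status.Phase, ready)
		}
	}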

                                                
                                    
TestFunctional/serial/LogsCmd (1.48s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-350101 logs: (1.47968775s)
--- PASS: TestFunctional/serial/LogsCmd (1.48s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.53s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 logs --file /tmp/TestFunctionalserialLogsFileCmd883952378/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-350101 logs --file /tmp/TestFunctionalserialLogsFileCmd883952378/001/logs.txt: (1.525974364s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.53s)

                                                
                                    
TestFunctional/serial/InvalidService (4.62s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-350101 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-350101
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-350101: exit status 115 (386.309853ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31177 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-350101 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.62s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-350101 config get cpus: exit status 14 (75.161ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-350101 config get cpus: exit status 14 (65.285995ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (10.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-350101 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-350101 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 29545: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.93s)

                                                
                                    
TestFunctional/parallel/DryRun (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-350101 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-350101 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (203.361446ms)

                                                
                                                
-- stdout --
	* [functional-350101] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22122
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22122-2686/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-2686/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 18:27:08.772327   29271 out.go:360] Setting OutFile to fd 1 ...
	I1213 18:27:08.772715   29271 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:27:08.772730   29271 out.go:374] Setting ErrFile to fd 2...
	I1213 18:27:08.772737   29271 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:27:08.773311   29271 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-2686/.minikube/bin
	I1213 18:27:08.773878   29271 out.go:368] Setting JSON to false
	I1213 18:27:08.775032   29271 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4181,"bootTime":1765646248,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 18:27:08.775148   29271 start.go:143] virtualization:  
	I1213 18:27:08.778316   29271 out.go:179] * [functional-350101] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 18:27:08.782088   29271 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 18:27:08.782232   29271 notify.go:221] Checking for updates...
	I1213 18:27:08.788357   29271 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 18:27:08.791305   29271 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-2686/kubeconfig
	I1213 18:27:08.794302   29271 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-2686/.minikube
	I1213 18:27:08.797206   29271 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 18:27:08.800090   29271 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 18:27:08.803522   29271 config.go:182] Loaded profile config "functional-350101": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 18:27:08.804184   29271 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 18:27:08.834602   29271 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 18:27:08.834783   29271 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 18:27:08.896045   29271 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-13 18:27:08.886101765 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 18:27:08.896163   29271 docker.go:319] overlay module found
	I1213 18:27:08.899255   29271 out.go:179] * Using the docker driver based on existing profile
	I1213 18:27:08.902175   29271 start.go:309] selected driver: docker
	I1213 18:27:08.902201   29271 start.go:927] validating driver "docker" against &{Name:functional-350101 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-350101 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 18:27:08.902308   29271 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 18:27:08.905785   29271 out.go:203] 
	W1213 18:27:08.908621   29271 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1213 18:27:08.911511   29271 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-350101 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.45s)
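The dry run above is rejected because the requested 250MB is below the usable minimum of 1800MB reported in the error. A simplified illustration of that kind of pre-flight memory check; the parsing and error text are deliberately minimal and are not minikube's implementation:

	// Reject a memory request below a fixed floor before doing any work,
	// mirroring the RSRC_INSUFFICIENT_REQ_MEMORY behaviour seen above.
	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	const minMemoryMB = 1800 // the floor quoted in the dry-run error

	func parseMB(s string) (int, error) {
		s = strings.TrimSuffix(strings.ToUpper(strings.TrimSpace(s)), "MB")
		return strconv.Atoi(s)
	}

	func validateMemory(requested string) error {
		mb, err := parseMB(requested)
		if err != nil {
			return fmt.Errorf("cannot parse %q: %w", requested, err)
		}
		if mb < minMemoryMB {
			return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
				mb, minMemoryMB)
		}
		return nil
	}

	func main() {
		fmt.Println(validateMemory("250MB"))  // rejected, as in the dry run above
		fmt.Println(validateMemory("4096MB")) // accepted
	}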

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-350101 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-350101 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (212.144462ms)

                                                
                                                
-- stdout --
	* [functional-350101] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22122
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22122-2686/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-2686/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 18:27:08.567919   29223 out.go:360] Setting OutFile to fd 1 ...
	I1213 18:27:08.568043   29223 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:27:08.568049   29223 out.go:374] Setting ErrFile to fd 2...
	I1213 18:27:08.568055   29223 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:27:08.569084   29223 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-2686/.minikube/bin
	I1213 18:27:08.569501   29223 out.go:368] Setting JSON to false
	I1213 18:27:08.570377   29223 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4181,"bootTime":1765646248,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 18:27:08.570462   29223 start.go:143] virtualization:  
	I1213 18:27:08.574263   29223 out.go:179] * [functional-350101] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1213 18:27:08.577608   29223 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 18:27:08.577671   29223 notify.go:221] Checking for updates...
	I1213 18:27:08.583551   29223 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 18:27:08.587245   29223 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-2686/kubeconfig
	I1213 18:27:08.590253   29223 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-2686/.minikube
	I1213 18:27:08.593216   29223 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 18:27:08.596036   29223 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 18:27:08.599633   29223 config.go:182] Loaded profile config "functional-350101": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 18:27:08.600305   29223 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 18:27:08.625663   29223 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 18:27:08.625773   29223 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 18:27:08.692118   29223 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-13 18:27:08.682807461 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 18:27:08.692224   29223 docker.go:319] overlay module found
	I1213 18:27:08.695334   29223 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1213 18:27:08.698222   29223 start.go:309] selected driver: docker
	I1213 18:27:08.698252   29223 start.go:927] validating driver "docker" against &{Name:functional-350101 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-350101 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 18:27:08.699175   29223 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 18:27:08.702712   29223 out.go:203] 
	W1213 18:27:08.705690   29223 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1213 18:27:08.708567   29223 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.16s)
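The second status call above passes a Go text/template as the `-f` format string (the `kublet` spelling is verbatim from the test). A sketch of how such a format is applied, using a stand-in Status struct whose fields merely mirror the placeholders in the log rather than minikube's actual type:

	// Apply a `status -f` style format string to a status value with
	// the standard text/template package.
	package main

	import (
		"os"
		"text/template"
	)

	type Status struct {
		Host, Kubelet, APIServer, Kubeconfig string
	}

	func main() {
		format := "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
		tmpl := template.Must(template.New("status").Parse(format))
		st := Status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"}
		if err := tmpl.Execute(os.Stdout, st); err != nil {
			panic(err)
		}
	}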

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (7.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-350101 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-350101 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-7d85dfc575-vw4gx" [f7d02306-9732-433d-ba63-37b0cc68a7c9] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-7d85dfc575-vw4gx" [f7d02306-9732-433d-ba63-37b0cc68a7c9] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.003993094s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:32739
functional_test.go:1680: http://192.168.49.2:32739: success! body:
Request served by hello-node-connect-7d85dfc575-vw4gx

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:32739
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.58s)
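Once `minikube service hello-node-connect --url` prints the NodePort URL, the connectivity check is a plain HTTP GET whose echoed request body is shown above. A minimal sketch of that fetch; the URL is the one from this run and will differ on other clusters:

	// Fetch the NodePort URL reported by `minikube service --url` and
	// print the echo-server's response.
	package main

	import (
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		resp, err := http.Get("http://192.168.49.2:32739")
		if err != nil {
			fmt.Println("request failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("status %d\n%s", resp.StatusCode, body)
	}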

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (18.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [a1240810-51c4-40a3-a64c-cafafc3480ba] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003602271s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-350101 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-350101 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-350101 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-350101 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [3ea6a573-004e-4e4f-8ff6-b1a14c844e95] Pending
helpers_test.go:353: "sp-pod" [3ea6a573-004e-4e4f-8ff6-b1a14c844e95] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003731441s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-350101 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-350101 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-350101 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [1c01de26-e048-444e-ad97-fe65aeb68543] Pending
helpers_test.go:353: "sp-pod" [1c01de26-e048-444e-ad97-fe65aeb68543] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.002905309s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-350101 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (18.87s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.70s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 ssh -n functional-350101 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 cp functional-350101:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd684110238/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 ssh -n functional-350101 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 ssh -n functional-350101 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.50s)

                                                
                                    
TestFunctional/parallel/FileSync (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/4637/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 ssh "sudo cat /etc/test/nested/copy/4637/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.43s)

                                                
                                    
TestFunctional/parallel/CertSync (2.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/4637.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 ssh "sudo cat /etc/ssl/certs/4637.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/4637.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 ssh "sudo cat /usr/share/ca-certificates/4637.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/46372.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 ssh "sudo cat /etc/ssl/certs/46372.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/46372.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 ssh "sudo cat /usr/share/ca-certificates/46372.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.17s)
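Note: the cert-sync assertions can be replayed by hand; the .pem names come from the test runner's PID and the hashed names (51391683.0, 3ec20f2e.0) from the certificate subjects, so both vary between runs. A sketch using the same commands:

  # certificates dropped under ~/.minikube/certs are synced into the node's trust store
  out/minikube-linux-arm64 -p functional-350101 ssh "sudo cat /etc/ssl/certs/4637.pem"
  out/minikube-linux-arm64 -p functional-350101 ssh "sudo cat /usr/share/ca-certificates/4637.pem"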

                                                
                                    
TestFunctional/parallel/NodeLabels (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-350101 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.12s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-350101 ssh "sudo systemctl is-active docker": exit status 1 (410.744008ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-350101 ssh "sudo systemctl is-active containerd": exit status 1 (459.061042ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.87s)
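Note: the non-zero exits above are the expected outcome. `systemctl is-active` prints "inactive" and exits with status 3 when a unit is not running, which is exactly what the test wants for the two runtimes that are unused on a crio cluster; only the active runtime should return 0 (assuming the service is named crio on this image):

  out/minikube-linux-arm64 -p functional-350101 ssh "sudo systemctl is-active docker"      # inactive, exit 3
  out/minikube-linux-arm64 -p functional-350101 ssh "sudo systemctl is-active containerd"  # inactive, exit 3
  out/minikube-linux-arm64 -p functional-350101 ssh "sudo systemctl is-active crio"        # active, exit 0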

                                                
                                    
TestFunctional/parallel/License (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.33s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-350101 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-350101 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-350101 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-350101 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 27232: os: process already finished
helpers_test.go:520: unable to terminate pid 27035: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-350101 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-350101 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [fb1017b1-7ec4-41a7-9e73-9f8db981534f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [fb1017b1-7ec4-41a7-9e73-9f8db981534f] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.003863647s
I1213 18:26:50.903286    4637 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.45s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-350101 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.100.89.176 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
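Note: taken together, the tunnel sub-tests follow the usual LoadBalancer workflow: keep `minikube tunnel` running, create a LoadBalancer service, wait for an ingress IP, then hit it directly. A condensed sketch of the same steps (testsvc.yaml is the test fixture; any LoadBalancer service would do):

  out/minikube-linux-arm64 -p functional-350101 tunnel --alsologtostderr &   # keep the tunnel running in the background
  kubectl --context functional-350101 apply -f testdata/testsvc.yaml
  IP=$(kubectl --context functional-350101 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  curl "http://$IP"   # 10.100.89.176 in this run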

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-350101 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (6.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-350101 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-350101 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-75c85bcc94-rhn4x" [0a066f11-1f6f-416f-9bf5-5f025c314ac5] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-75c85bcc94-rhn4x" [0a066f11-1f6f-416f-9bf5-5f025c314ac5] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.007064684s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.23s)
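Note: the hello-node deployment used by the later ServiceCmd sub-tests is just an echo server exposed as a NodePort, created with the same two kubectl commands shown above:

  kubectl --context functional-350101 create deployment hello-node --image kicbase/echo-server
  kubectl --context functional-350101 expose deployment hello-node --type=NodePort --port=8080
  kubectl --context functional-350101 get pods -l app=hello-node   # wait until the pod is Running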

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "370.055355ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "53.073584ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "372.454025ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "63.195901ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-350101 /tmp/TestFunctionalparallelMountCmdany-port477167925/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765650423323671314" to /tmp/TestFunctionalparallelMountCmdany-port477167925/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765650423323671314" to /tmp/TestFunctionalparallelMountCmdany-port477167925/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765650423323671314" to /tmp/TestFunctionalparallelMountCmdany-port477167925/001/test-1765650423323671314
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-350101 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (372.08962ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1213 18:27:03.696997    4637 retry.go:31] will retry after 575.505591ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 13 18:27 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 13 18:27 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 13 18:27 test-1765650423323671314
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 ssh cat /mount-9p/test-1765650423323671314
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-350101 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [3f1c77c2-301d-4769-8b20-56d340186634] Pending
helpers_test.go:353: "busybox-mount" [3f1c77c2-301d-4769-8b20-56d340186634] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [3f1c77c2-301d-4769-8b20-56d340186634] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [3f1c77c2-301d-4769-8b20-56d340186634] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.005099081s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-350101 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-350101 /tmp/TestFunctionalparallelMountCmdany-port477167925/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.51s)
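Note: the mount test drives a 9p mount from the host into the node and checks it from both sides; stripped of the busybox pod, the core commands are as follows (the host path here is arbitrary):

  out/minikube-linux-arm64 mount -p functional-350101 /tmp/somedir:/mount-9p --alsologtostderr -v=1 &
  out/minikube-linux-arm64 -p functional-350101 ssh "findmnt -T /mount-9p | grep 9p"   # confirm the 9p mount is up
  out/minikube-linux-arm64 -p functional-350101 ssh -- ls -la /mount-9p
  out/minikube-linux-arm64 -p functional-350101 ssh "sudo umount -f /mount-9p"         # tear down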

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.60s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 service list -o json
functional_test.go:1504: Took "547.166898ms" to run "out/minikube-linux-arm64 -p functional-350101 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.55s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:31091
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.39s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31091
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.38s)
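Note: the List/JSONOutput/HTTPS/Format/URL sub-tests are different renderings of the same NodePort lookup; the URL forms are the ones most useful interactively (the IP and port differ per run):

  out/minikube-linux-arm64 -p functional-350101 service hello-node --url
  out/minikube-linux-arm64 -p functional-350101 service --namespace=default --https --url hello-node
  curl http://192.168.49.2:31091   # the endpoint printed by the first command in this run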

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-350101 /tmp/TestFunctionalparallelMountCmdspecific-port3575547662/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-350101 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (541.190766ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1213 18:27:12.371758    4637 retry.go:31] will retry after 346.616864ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-350101 /tmp/TestFunctionalparallelMountCmdspecific-port3575547662/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-350101 ssh "sudo umount -f /mount-9p": exit status 1 (356.071277ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-350101 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-350101 /tmp/TestFunctionalparallelMountCmdspecific-port3575547662/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.16s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-350101 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2882954042/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-350101 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2882954042/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-350101 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2882954042/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-350101 ssh "findmnt -T" /mount1: exit status 1 (980.776351ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1213 18:27:14.978113    4637 retry.go:31] will retry after 691.838593ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-350101 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-350101 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2882954042/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-350101 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2882954042/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-350101 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2882954042/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.81s)
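Note: VerifyCleanup relies on `minikube mount --kill=true`, which terminates every mount daemon for the profile in one call instead of stopping them one by one; a minimal sketch (host path arbitrary):

  out/minikube-linux-arm64 mount -p functional-350101 /tmp/somedir:/mount1 &
  out/minikube-linux-arm64 mount -p functional-350101 /tmp/somedir:/mount2 &
  out/minikube-linux-arm64 mount -p functional-350101 /tmp/somedir:/mount3 &
  out/minikube-linux-arm64 mount -p functional-350101 --kill=true   # kills all three background mounts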

                                                
                                    
TestFunctional/parallel/Version/short (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
TestFunctional/parallel/Version/components (1.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-350101 version -o=json --components: (1.053688155s)
--- PASS: TestFunctional/parallel/Version/components (1.05s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-350101 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
localhost/minikube-local-cache-test:functional-350101
localhost/kicbase/echo-server:functional-350101
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-350101 image ls --format short --alsologtostderr:
I1213 18:27:24.843017   32135 out.go:360] Setting OutFile to fd 1 ...
I1213 18:27:24.843141   32135 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 18:27:24.843194   32135 out.go:374] Setting ErrFile to fd 2...
I1213 18:27:24.843214   32135 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 18:27:24.843495   32135 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-2686/.minikube/bin
I1213 18:27:24.844097   32135 config.go:182] Loaded profile config "functional-350101": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 18:27:24.844261   32135 config.go:182] Loaded profile config "functional-350101": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 18:27:24.844787   32135 cli_runner.go:164] Run: docker container inspect functional-350101 --format={{.State.Status}}
I1213 18:27:24.864080   32135 ssh_runner.go:195] Run: systemctl --version
I1213 18:27:24.864128   32135 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-350101
I1213 18:27:24.882434   32135 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-350101/id_rsa Username:docker}
I1213 18:27:25.005427   32135 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)
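Note: this and the following ImageList variants all wrap the same `sudo crictl images --output json` call on the node (visible in each stderr trace); only the client-side formatting differs:

  out/minikube-linux-arm64 -p functional-350101 image ls --format short
  out/minikube-linux-arm64 -p functional-350101 image ls --format table
  out/minikube-linux-arm64 -p functional-350101 image ls --format json
  out/minikube-linux-arm64 -p functional-350101 image ls --format yaml
  out/minikube-linux-arm64 -p functional-350101 ssh "sudo crictl images --output json"   # query the runtime directly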

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-350101 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ docker.io/kicbase/echo-server           │ latest             │ ce2d2cda2d858 │ 4.79MB │
│ localhost/kicbase/echo-server           │ functional-350101  │ ce2d2cda2d858 │ 4.79MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ localhost/minikube-local-cache-test     │ functional-350101  │ 26cfd7911405e │ 3.33kB │
│ public.ecr.aws/nginx/nginx              │ alpine             │ 10afed3caf3ee │ 55.1MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ 2c5f0dedd21c2 │ 60.9MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.2            │ b178af3d91f80 │ 84.8MB │
│ registry.k8s.io/kube-proxy              │ v1.34.2            │ 94bff1bec29fd │ 75.9MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.2            │ 4f982e73e768a │ 51.6MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.2            │ 1b34917560f09 │ 72.6MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-350101 image ls --format table --alsologtostderr:
I1213 18:27:25.396556   32306 out.go:360] Setting OutFile to fd 1 ...
I1213 18:27:25.396707   32306 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 18:27:25.396719   32306 out.go:374] Setting ErrFile to fd 2...
I1213 18:27:25.396724   32306 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 18:27:25.396982   32306 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-2686/.minikube/bin
I1213 18:27:25.397656   32306 config.go:182] Loaded profile config "functional-350101": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 18:27:25.397775   32306 config.go:182] Loaded profile config "functional-350101": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 18:27:25.398318   32306 cli_runner.go:164] Run: docker container inspect functional-350101 --format={{.State.Status}}
I1213 18:27:25.428270   32306 ssh_runner.go:195] Run: systemctl --version
I1213 18:27:25.428340   32306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-350101
I1213 18:27:25.458715   32306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-350101/id_rsa Username:docker}
I1213 18:27:25.566161   32306 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-350101 image ls --format json --alsologtostderr:
[{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b","docker.io/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b","localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-350101"],"size":"4789170"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b4610899694
49f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"10afed3caf3eed1b711b8fa0a9600a7b488a45653a15a598a47ac570c1204cc4","repoDigests":["public.ecr.aws/nginx/nginx@sha256:2faa7e87b6fbce823070978247970cea2ad90b1936e84eeae1bd2680b03c168d","public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"55077248"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8
s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"60857170"},{"id":"4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949","repoDigests":["registry.k8s.io/kube-scheduler@sha256:3eff58b308cdc6c65cf030333090e14cc77bea4ed4ea9a92d212a0babc924ffe","registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"51592021"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569
b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:4b3abd4d4543ac8451f97e9771aa0a29a9958e51ac02fe44900b4a224031df89","registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"72629077"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","
repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"26cfd7911405e248b68801d433969048d3a4887978c11dfc7449a9513f160a82","repoDigests":["localhost/minikube-local-cache-test@sha256:d413e43cea7bcd8a6f3b714a871727dee011e561a0ab319fa3ba1181eaf6d026"],"repoTags":["localhost/minikube-local-cache-test:functional-350101"],"size":"3330"},{"id":"94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786","repoDigests":["registry.k8s.io/kube-proxy@sha256:20a31b16a001e3e4db71a17ba8effc4b145a3afa2086e844ab40dc5baa5b8d12","registry.k8s.io/kube-proxy@sha256:d8b843
ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"75941783"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7","repoDigests":["registry.k8s.io/kube-apiserver@sha256:9a94f333d6fe202d804910534ef052b2cfa6
50982cdcbe48e92339c8d314dd84","registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"84753391"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-350101 image ls --format json --alsologtostderr:
I1213 18:27:25.135719   32212 out.go:360] Setting OutFile to fd 1 ...
I1213 18:27:25.136283   32212 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 18:27:25.136316   32212 out.go:374] Setting ErrFile to fd 2...
I1213 18:27:25.136336   32212 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 18:27:25.136624   32212 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-2686/.minikube/bin
I1213 18:27:25.137350   32212 config.go:182] Loaded profile config "functional-350101": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 18:27:25.137519   32212 config.go:182] Loaded profile config "functional-350101": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 18:27:25.138080   32212 cli_runner.go:164] Run: docker container inspect functional-350101 --format={{.State.Status}}
I1213 18:27:25.169197   32212 ssh_runner.go:195] Run: systemctl --version
I1213 18:27:25.169255   32212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-350101
I1213 18:27:25.191732   32212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-350101/id_rsa Username:docker}
I1213 18:27:25.310090   32212 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-350101 image ls --format yaml --alsologtostderr:
- id: 26cfd7911405e248b68801d433969048d3a4887978c11dfc7449a9513f160a82
repoDigests:
- localhost/minikube-local-cache-test@sha256:d413e43cea7bcd8a6f3b714a871727dee011e561a0ab319fa3ba1181eaf6d026
repoTags:
- localhost/minikube-local-cache-test:functional-350101
size: "3330"
- id: 2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "60857170"
- id: 4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:3eff58b308cdc6c65cf030333090e14cc77bea4ed4ea9a92d212a0babc924ffe
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "51592021"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b
- docker.io/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-350101
size: "4789170"
- id: 1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:4b3abd4d4543ac8451f97e9771aa0a29a9958e51ac02fe44900b4a224031df89
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "72629077"
- id: 94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786
repoDigests:
- registry.k8s.io/kube-proxy@sha256:20a31b16a001e3e4db71a17ba8effc4b145a3afa2086e844ab40dc5baa5b8d12
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "75941783"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 10afed3caf3eed1b711b8fa0a9600a7b488a45653a15a598a47ac570c1204cc4
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:2faa7e87b6fbce823070978247970cea2ad90b1936e84eeae1bd2680b03c168d
- public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "55077248"
- id: b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:9a94f333d6fe202d804910534ef052b2cfa650982cdcbe48e92339c8d314dd84
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "84753391"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-350101 image ls --format yaml --alsologtostderr:
I1213 18:27:24.842689   32136 out.go:360] Setting OutFile to fd 1 ...
I1213 18:27:24.842855   32136 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 18:27:24.842882   32136 out.go:374] Setting ErrFile to fd 2...
I1213 18:27:24.842889   32136 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 18:27:24.843254   32136 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-2686/.minikube/bin
I1213 18:27:24.844003   32136 config.go:182] Loaded profile config "functional-350101": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 18:27:24.844186   32136 config.go:182] Loaded profile config "functional-350101": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 18:27:24.844801   32136 cli_runner.go:164] Run: docker container inspect functional-350101 --format={{.State.Status}}
I1213 18:27:24.863162   32136 ssh_runner.go:195] Run: systemctl --version
I1213 18:27:24.863225   32136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-350101
I1213 18:27:24.882003   32136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-350101/id_rsa Username:docker}
I1213 18:27:24.995847   32136 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-350101 ssh pgrep buildkitd: exit status 1 (364.084546ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 image build -t localhost/my-image:functional-350101 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-350101 image build -t localhost/my-image:functional-350101 testdata/build --alsologtostderr: (3.374215531s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-350101 image build -t localhost/my-image:functional-350101 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> decdac1cf95
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-350101
--> 054a06f1c42
Successfully tagged localhost/my-image:functional-350101
054a06f1c42ce75bcc27f33dcaa2fb20aa665a14f5164db2b4855d7d1511f6c1
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-350101 image build -t localhost/my-image:functional-350101 testdata/build --alsologtostderr:
I1213 18:27:25.482682   32318 out.go:360] Setting OutFile to fd 1 ...
I1213 18:27:25.482951   32318 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 18:27:25.482980   32318 out.go:374] Setting ErrFile to fd 2...
I1213 18:27:25.483000   32318 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 18:27:25.483326   32318 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-2686/.minikube/bin
I1213 18:27:25.484002   32318 config.go:182] Loaded profile config "functional-350101": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 18:27:25.484715   32318 config.go:182] Loaded profile config "functional-350101": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 18:27:25.485305   32318 cli_runner.go:164] Run: docker container inspect functional-350101 --format={{.State.Status}}
I1213 18:27:25.503964   32318 ssh_runner.go:195] Run: systemctl --version
I1213 18:27:25.504017   32318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-350101
I1213 18:27:25.526767   32318 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-350101/id_rsa Username:docker}
I1213 18:27:25.651324   32318 build_images.go:162] Building image from path: /tmp/build.5471273.tar
I1213 18:27:25.651410   32318 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1213 18:27:25.659393   32318 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.5471273.tar
I1213 18:27:25.664632   32318 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.5471273.tar: stat -c "%s %y" /var/lib/minikube/build/build.5471273.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.5471273.tar': No such file or directory
I1213 18:27:25.664693   32318 ssh_runner.go:362] scp /tmp/build.5471273.tar --> /var/lib/minikube/build/build.5471273.tar (3072 bytes)
I1213 18:27:25.689955   32318 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.5471273
I1213 18:27:25.698087   32318 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.5471273 -xf /var/lib/minikube/build/build.5471273.tar
I1213 18:27:25.706331   32318 crio.go:315] Building image: /var/lib/minikube/build/build.5471273
I1213 18:27:25.706403   32318 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-350101 /var/lib/minikube/build/build.5471273 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1213 18:27:28.762825   32318 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-350101 /var/lib/minikube/build/build.5471273 --cgroup-manager=cgroupfs: (3.056394343s)
I1213 18:27:28.762889   32318 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.5471273
I1213 18:27:28.770824   32318 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.5471273.tar
I1213 18:27:28.778667   32318 build_images.go:218] Built localhost/my-image:functional-350101 from /tmp/build.5471273.tar
I1213 18:27:28.778703   32318 build_images.go:134] succeeded building to: functional-350101
I1213 18:27:28.778717   32318 build_images.go:135] failed building to: 
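Note: the trace above is the whole `image build` flow on a crio node: the build context is packed into a tar on the host, copied over SSH into /var/lib/minikube/build/, unpacked, and built with podman. A rough manual equivalent (illustrative only; the tar/cp/ssh plumbing below is an assumption, only the podman invocation is taken verbatim from this run):
  tar -C testdata/build -cf /tmp/ctx.tar .
  out/minikube-linux-arm64 -p functional-350101 cp /tmp/ctx.tar /home/docker/ctx.tar
  out/minikube-linux-arm64 -p functional-350101 ssh "mkdir -p /tmp/ctx && tar -C /tmp/ctx -xf /home/docker/ctx.tar"
  out/minikube-linux-arm64 -p functional-350101 ssh "sudo podman build -t localhost/my-image:functional-350101 /tmp/ctx --cgroup-manager=cgroupfs"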
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 image ls
E1213 18:27:28.796990    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.98s)

TestFunctional/parallel/ImageCommands/Setup (0.62s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-350101
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.62s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 image load --daemon kicbase/echo-server:functional-350101 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 image ls
2025/12/13 18:27:19 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.21s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 image load --daemon kicbase/echo-server:functional-350101 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.09s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-350101
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 image load --daemon kicbase/echo-server:functional-350101 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.34s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 image save kicbase/echo-server:functional-350101 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.52s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 image rm kicbase/echo-server:functional-350101 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.60s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.79s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.79s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-350101
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-350101 image save --daemon kicbase/echo-server:functional-350101 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-350101
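Taken together, the save/remove/load/save-daemon tests above round-trip an image through a tarball and back. A condensed version of that flow (the /tmp path below is illustrative; the subcommands themselves appear in the runs above):
  out/minikube-linux-arm64 -p functional-350101 image save kicbase/echo-server:functional-350101 /tmp/echo-server.tar
  out/minikube-linux-arm64 -p functional-350101 image rm kicbase/echo-server:functional-350101
  out/minikube-linux-arm64 -p functional-350101 image load /tmp/echo-server.tar
  out/minikube-linux-arm64 -p functional-350101 image ls    # the tag should be listed again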
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.49s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-350101
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-350101
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-350101
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/test/nested/copy/4637/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.45s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-752103 cache add registry.k8s.io/pause:3.1: (1.161543238s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-752103 cache add registry.k8s.io/pause:3.3: (1.158512061s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-752103 cache add registry.k8s.io/pause:latest: (1.12911621s)
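For reference, the cache subcommands exercised in this group fit together roughly like this (all commands appear verbatim in the runs above and below; the ordering here is just a readable summary):
  out/minikube-linux-arm64 -p functional-752103 cache add registry.k8s.io/pause:3.1    # pull on the host and load into the node
  out/minikube-linux-arm64 cache list                                                  # host-side list of cached images
  out/minikube-linux-arm64 -p functional-752103 ssh sudo crictl images                 # confirm the image inside the node
  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1                      # drop it from the local cache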
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.45s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.09s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-752103 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach1173006151/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 cache add minikube-local-cache-test:functional-752103
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 cache delete minikube-local-cache-test:functional-752103
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-752103
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.09s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.87s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-752103 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (299.521096ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 ssh sudo crictl inspecti registry.k8s.io/pause:latest
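The sequence above is the cache-reload contract in miniature: delete the image from the node with crictl, confirm `inspecti` now fails, run `cache reload`, and confirm `inspecti` succeeds again. A minimal repro sketch using the same profile:
  out/minikube-linux-arm64 -p functional-752103 ssh sudo crictl rmi registry.k8s.io/pause:latest
  out/minikube-linux-arm64 -p functional-752103 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # expected to fail: image removed
  out/minikube-linux-arm64 -p functional-752103 cache reload                                            # pushes cached images back to the node
  out/minikube-linux-arm64 -p functional-752103 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again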
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.87s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.12s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (0.93s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 logs
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (0.93s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (0.96s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs864268042/001/logs.txt
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (0.96s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.47s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-752103 config get cpus: exit status 14 (64.294844ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-752103 config get cpus: exit status 14 (64.462772ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
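The exit-14 cases above are the expected behaviour of `config get` on an unset key. The whole set/get/unset cycle being exercised is, in short:
  out/minikube-linux-arm64 -p functional-752103 config set cpus 2
  out/minikube-linux-arm64 -p functional-752103 config get cpus      # prints 2
  out/minikube-linux-arm64 -p functional-752103 config unset cpus
  out/minikube-linux-arm64 -p functional-752103 config get cpus      # exit status 14: "specified key could not be found in config"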
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.47s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.46s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-752103 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1213 18:56:42.459034    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-350101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-752103 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (223.789162ms)

                                                
                                                
-- stdout --
	* [functional-752103] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22122
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22122-2686/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-2686/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 18:56:42.458840   61723 out.go:360] Setting OutFile to fd 1 ...
	I1213 18:56:42.458965   61723 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:56:42.458970   61723 out.go:374] Setting ErrFile to fd 2...
	I1213 18:56:42.458975   61723 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:56:42.459324   61723 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-2686/.minikube/bin
	I1213 18:56:42.459870   61723 out.go:368] Setting JSON to false
	I1213 18:56:42.460668   61723 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5955,"bootTime":1765646248,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 18:56:42.460736   61723 start.go:143] virtualization:  
	I1213 18:56:42.464200   61723 out.go:179] * [functional-752103] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 18:56:42.467864   61723 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 18:56:42.467995   61723 notify.go:221] Checking for updates...
	I1213 18:56:42.473754   61723 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 18:56:42.476711   61723 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-2686/kubeconfig
	I1213 18:56:42.479628   61723 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-2686/.minikube
	I1213 18:56:42.482559   61723 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 18:56:42.485377   61723 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 18:56:42.488710   61723 config.go:182] Loaded profile config "functional-752103": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 18:56:42.489344   61723 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 18:56:42.514814   61723 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 18:56:42.514989   61723 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 18:56:42.579283   61723 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 18:56:42.569434498 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 18:56:42.579402   61723 docker.go:319] overlay module found
	I1213 18:56:42.582620   61723 out.go:179] * Using the docker driver based on existing profile
	I1213 18:56:42.585517   61723 start.go:309] selected driver: docker
	I1213 18:56:42.585544   61723 start.go:927] validating driver "docker" against &{Name:functional-752103 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-752103 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 18:56:42.585658   61723 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 18:56:42.589298   61723 out.go:203] 
	W1213 18:56:42.592433   61723 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1213 18:56:42.595278   61723 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-752103 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
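The exit-23 failure above is the intended result: with --dry-run, minikube only validates the requested configuration, and 250MB is below the 1800MB usable minimum it enforces. To reproduce just the validation step without touching the existing profile (illustrative; the higher memory value is an assumption, not taken from this run):
  out/minikube-linux-arm64 start -p functional-752103 --dry-run --memory 250MB --driver=docker --container-runtime=crio
  # exits 23 with RSRC_INSUFFICIENT_REQ_MEMORY; a value of 2048MB or more should clear this particular check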
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.46s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.2s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-752103 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-752103 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (202.783936ms)

                                                
                                                
-- stdout --
	* [functional-752103] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22122
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22122-2686/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-2686/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 18:56:51.866136   63710 out.go:360] Setting OutFile to fd 1 ...
	I1213 18:56:51.866293   63710 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:56:51.866306   63710 out.go:374] Setting ErrFile to fd 2...
	I1213 18:56:51.866312   63710 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 18:56:51.866680   63710 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-2686/.minikube/bin
	I1213 18:56:51.867086   63710 out.go:368] Setting JSON to false
	I1213 18:56:51.867979   63710 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5964,"bootTime":1765646248,"procs":162,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1213 18:56:51.868051   63710 start.go:143] virtualization:  
	I1213 18:56:51.873271   63710 out.go:179] * [functional-752103] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1213 18:56:51.876287   63710 out.go:179]   - MINIKUBE_LOCATION=22122
	I1213 18:56:51.876379   63710 notify.go:221] Checking for updates...
	I1213 18:56:51.882774   63710 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 18:56:51.885834   63710 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22122-2686/kubeconfig
	I1213 18:56:51.888894   63710 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-2686/.minikube
	I1213 18:56:51.891868   63710 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 18:56:51.894781   63710 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 18:56:51.898170   63710 config.go:182] Loaded profile config "functional-752103": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 18:56:51.898807   63710 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 18:56:51.935030   63710 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 18:56:51.935207   63710 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 18:56:51.996198   63710 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 18:56:51.98661626 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 18:56:51.997457   63710 docker.go:319] overlay module found
	I1213 18:56:52.001965   63710 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1213 18:56:52.005039   63710 start.go:309] selected driver: docker
	I1213 18:56:52.005084   63710 start.go:927] validating driver "docker" against &{Name:functional-752103 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-752103 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 18:56:52.005188   63710 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 18:56:52.008846   63710 out.go:203] 
	W1213 18:56:52.011880   63710 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1213 18:56:52.014872   63710 out.go:203] 

                                                
                                                
** /stderr **
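The French output above is the same dry-run failure as in DryRun; minikube picks the message catalogue from the caller's locale environment. A sketch of how to trigger it by hand (assuming the locale is selected via LC_ALL; the exact variable the harness sets is not shown in this log):
  LC_ALL=fr_FR.UTF-8 out/minikube-linux-arm64 start -p functional-752103 --dry-run --memory 250MB --driver=docker --container-runtime=crio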
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.20s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.86s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.86s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (2.08s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 ssh -n functional-752103 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 cp functional-752103:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp233435850/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 ssh -n functional-752103 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 ssh -n functional-752103 "sudo cat /tmp/does/not/exist/cp-test.txt"
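The cp/ssh pairing above is the basic pattern for moving files in and out of the node: `cp` in either direction, then `ssh ... sudo cat` to verify. Condensed:
  out/minikube-linux-arm64 -p functional-752103 cp testdata/cp-test.txt /home/docker/cp-test.txt
  out/minikube-linux-arm64 -p functional-752103 ssh -n functional-752103 "sudo cat /home/docker/cp-test.txt"
  out/minikube-linux-arm64 -p functional-752103 cp functional-752103:/home/docker/cp-test.txt /tmp/cp-test.txt   # node -> host (destination path illustrative)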
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (2.08s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.35s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/4637/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 ssh "sudo cat /etc/test/nested/copy/4637/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
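FileSync verifies minikube's file-sync mechanism: files placed under $MINIKUBE_HOME/files/ on the host are copied into the node at the corresponding absolute path when the cluster starts. In this run that means:
  /home/jenkins/minikube-integration/22122-2686/.minikube/files/etc/test/nested/copy/4637/hosts  ->  /etc/test/nested/copy/4637/hosts (inside the node)
  out/minikube-linux-arm64 -p functional-752103 ssh "sudo cat /etc/test/nested/copy/4637/hosts"   # shows the synced content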
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.35s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (2.3s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/4637.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 ssh "sudo cat /etc/ssl/certs/4637.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/4637.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 ssh "sudo cat /usr/share/ca-certificates/4637.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/46372.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 ssh "sudo cat /etc/ssl/certs/46372.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/46372.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 ssh "sudo cat /usr/share/ca-certificates/46372.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
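CertSync checks that a user-supplied certificate ends up in the places a client would look for it inside the node: /etc/ssl/certs/<name>.pem, /usr/share/ca-certificates/<name>.pem, and a hash-named copy such as /etc/ssl/certs/51391683.0 (presumably the OpenSSL subject-hash alias of the same certificate). Spot-checking by hand uses the same commands as above:
  out/minikube-linux-arm64 -p functional-752103 ssh "sudo cat /etc/ssl/certs/4637.pem"
  out/minikube-linux-arm64 -p functional-752103 ssh "sudo cat /etc/ssl/certs/51391683.0"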
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (2.30s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.68s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-752103 ssh "sudo systemctl is-active docker": exit status 1 (342.639833ms)

                                                
                                                
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-752103 ssh "sudo systemctl is-active containerd": exit status 1 (336.084502ms)

                                                
                                                
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
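Both probes above are expected to fail: with --container-runtime=crio, the docker and containerd services inside the node should be inactive, and `systemctl is-active` reports that with exit status 3 (the minikube ssh wrapper surfaces it as exit status 1). A quick manual check (the crio service name is an assumption about the kicbase image, not shown in this log):
  out/minikube-linux-arm64 -p functional-752103 ssh "sudo systemctl is-active crio"        # expect: active
  out/minikube-linux-arm64 -p functional-752103 ssh "sudo systemctl is-active docker"      # expect: inactive
  out/minikube-linux-arm64 -p functional-752103 ssh "sudo systemctl is-active containerd"  # expect: inactive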
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.68s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.45s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.45s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.05s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.49s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.49s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-752103 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
localhost/minikube-local-cache-test:functional-752103
localhost/kicbase/echo-server:functional-752103
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-752103 image ls --format short --alsologtostderr:
I1213 18:56:54.835660   64354 out.go:360] Setting OutFile to fd 1 ...
I1213 18:56:54.835861   64354 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 18:56:54.835888   64354 out.go:374] Setting ErrFile to fd 2...
I1213 18:56:54.835908   64354 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 18:56:54.836342   64354 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-2686/.minikube/bin
I1213 18:56:54.837361   64354 config.go:182] Loaded profile config "functional-752103": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 18:56:54.837563   64354 config.go:182] Loaded profile config "functional-752103": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 18:56:54.838572   64354 cli_runner.go:164] Run: docker container inspect functional-752103 --format={{.State.Status}}
I1213 18:56:54.855876   64354 ssh_runner.go:195] Run: systemctl --version
I1213 18:56:54.855935   64354 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
I1213 18:56:54.875000   64354 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa Username:docker}
I1213 18:56:54.979739   64354 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.23s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-752103 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-proxy              │ v1.35.0-beta.0     │ 404c2e1286177 │ 74.1MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ 71a676dd070f4 │ 1.63MB │
│ registry.k8s.io/coredns/coredns         │ v1.13.1            │ e08f4d9d2e6ed │ 74.5MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ 2c5f0dedd21c2 │ 60.9MB │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ localhost/kicbase/echo-server           │ functional-752103  │ ce2d2cda2d858 │ 4.79MB │
│ localhost/my-image                      │ functional-752103  │ 7e0cbc994a655 │ 1.64MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ registry.k8s.io/kube-scheduler          │ v1.35.0-beta.0     │ 16378741539f1 │ 49.8MB │
│ localhost/minikube-local-cache-test     │ functional-752103  │ 26cfd7911405e │ 3.33kB │
│ registry.k8s.io/kube-apiserver          │ v1.35.0-beta.0     │ ccd634d9bcc36 │ 85MB   │
│ registry.k8s.io/kube-controller-manager │ v1.35.0-beta.0     │ 68b5f775f1876 │ 72.2MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-752103 image ls --format table --alsologtostderr:
I1213 18:56:59.347543   64845 out.go:360] Setting OutFile to fd 1 ...
I1213 18:56:59.347729   64845 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 18:56:59.347759   64845 out.go:374] Setting ErrFile to fd 2...
I1213 18:56:59.347780   64845 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 18:56:59.348046   64845 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-2686/.minikube/bin
I1213 18:56:59.348681   64845 config.go:182] Loaded profile config "functional-752103": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 18:56:59.348845   64845 config.go:182] Loaded profile config "functional-752103": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 18:56:59.349388   64845 cli_runner.go:164] Run: docker container inspect functional-752103 --format={{.State.Status}}
I1213 18:56:59.367225   64845 ssh_runner.go:195] Run: systemctl --version
I1213 18:56:59.367275   64845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
I1213 18:56:59.384569   64845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa Username:docker}
I1213 18:56:59.487508   64845 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-752103 image ls --format json --alsologtostderr:
[{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["localhost/kicbase/echo-server:functional-752103"],"size":"4788229"},{"id":"26cfd7911405e248b68801d433969048d3a4887978c11dfc7449a9513f160a82","repoDigests":["localhost/minikube-local-cache-test@sha256:d413e43cea7bcd8a6f3b714a871727dee011e561a0ab319fa3ba1181eaf6d026"],"repoTags":["localhost/minikube-local-cache-test:functional-752103"],"size":"3330"},{"id":"e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6","registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"74491780"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDiges
ts":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"7e0cbc994a65590efb2f125791e75ee5c70782507035561c5abd0bc4ab3433d5","repoDigests":["localhost/my-image@sha256:dd7fe78417bf480973ea1eea154c4ee3b948e6e0549bc68dce7a0db4887d235d"],"repoTags":["localhost/my-image:functional-752103"],"size":"1640791"},{"id":"16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6","registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"49822549"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede
00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904","repoDigests":["registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478","registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"74106775"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a8
53555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"60857170"},{"id":"ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4","repoDigests":["registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58","registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"84949999"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDige
sts":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d","registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"72170325"},{"id":"158cc2f8e47ffe3adca43b7a1fcb1ce1948a3a998766365aad55bad893798486","repoDigests":["docker.io/library/2c2ee104a2fb8f29d631519235ddf4f21791df239af18b7a7d61e84d11ad4951-tmp@sha256:469c0389a3f74eb9001ba75bf868a37
43ee254d5b80436d92c6becc32dc9d915"],"repoTags":[],"size":"1638179"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-752103 image ls --format json --alsologtostderr:
I1213 18:56:59.126473   64810 out.go:360] Setting OutFile to fd 1 ...
I1213 18:56:59.126636   64810 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 18:56:59.126645   64810 out.go:374] Setting ErrFile to fd 2...
I1213 18:56:59.126651   64810 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 18:56:59.126893   64810 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-2686/.minikube/bin
I1213 18:56:59.127474   64810 config.go:182] Loaded profile config "functional-752103": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 18:56:59.127598   64810 config.go:182] Loaded profile config "functional-752103": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 18:56:59.128129   64810 cli_runner.go:164] Run: docker container inspect functional-752103 --format={{.State.Status}}
I1213 18:56:59.144919   64810 ssh_runner.go:195] Run: systemctl --version
I1213 18:56:59.144973   64810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
I1213 18:56:59.162153   64810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa Username:docker}
I1213 18:56:59.263471   64810 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-752103 image ls --format yaml --alsologtostderr:
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- localhost/kicbase/echo-server:functional-752103
size: "4788229"
- id: 26cfd7911405e248b68801d433969048d3a4887978c11dfc7449a9513f160a82
repoDigests:
- localhost/minikube-local-cache-test@sha256:d413e43cea7bcd8a6f3b714a871727dee011e561a0ab319fa3ba1181eaf6d026
repoTags:
- localhost/minikube-local-cache-test:functional-752103
size: "3330"
- id: 68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d
- registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "72170325"
- id: 404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904
repoDigests:
- registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478
- registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "74106775"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
- registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "74491780"
- id: 2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "60857170"
- id: ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58
- registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "84949999"
- id: 16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6
- registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "49822549"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-752103 image ls --format yaml --alsologtostderr:
I1213 18:56:55.076611   64391 out.go:360] Setting OutFile to fd 1 ...
I1213 18:56:55.076833   64391 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 18:56:55.076861   64391 out.go:374] Setting ErrFile to fd 2...
I1213 18:56:55.076879   64391 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 18:56:55.077320   64391 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-2686/.minikube/bin
I1213 18:56:55.078542   64391 config.go:182] Loaded profile config "functional-752103": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 18:56:55.078766   64391 config.go:182] Loaded profile config "functional-752103": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 18:56:55.079530   64391 cli_runner.go:164] Run: docker container inspect functional-752103 --format={{.State.Status}}
I1213 18:56:55.102309   64391 ssh_runner.go:195] Run: systemctl --version
I1213 18:56:55.102361   64391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
I1213 18:56:55.126519   64391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa Username:docker}
I1213 18:56:55.231540   64391 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.81s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-752103 ssh pgrep buildkitd: exit status 1 (259.754388ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 image build -t localhost/my-image:functional-752103 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-752103 image build -t localhost/my-image:functional-752103 testdata/build --alsologtostderr: (3.30843285s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-752103 image build -t localhost/my-image:functional-752103 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 158cc2f8e47
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-752103
--> 7e0cbc994a6
Successfully tagged localhost/my-image:functional-752103
7e0cbc994a65590efb2f125791e75ee5c70782507035561c5abd0bc4ab3433d5
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-752103 image build -t localhost/my-image:functional-752103 testdata/build --alsologtostderr:
I1213 18:56:55.577516   64494 out.go:360] Setting OutFile to fd 1 ...
I1213 18:56:55.577697   64494 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 18:56:55.577723   64494 out.go:374] Setting ErrFile to fd 2...
I1213 18:56:55.577743   64494 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 18:56:55.578014   64494 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-2686/.minikube/bin
I1213 18:56:55.578646   64494 config.go:182] Loaded profile config "functional-752103": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 18:56:55.579306   64494 config.go:182] Loaded profile config "functional-752103": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 18:56:55.579881   64494 cli_runner.go:164] Run: docker container inspect functional-752103 --format={{.State.Status}}
I1213 18:56:55.596628   64494 ssh_runner.go:195] Run: systemctl --version
I1213 18:56:55.596683   64494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-752103
I1213 18:56:55.613900   64494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/functional-752103/id_rsa Username:docker}
I1213 18:56:55.715522   64494 build_images.go:162] Building image from path: /tmp/build.1059753095.tar
I1213 18:56:55.715622   64494 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1213 18:56:55.723385   64494 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1059753095.tar
I1213 18:56:55.727005   64494 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1059753095.tar: stat -c "%s %y" /var/lib/minikube/build/build.1059753095.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1059753095.tar': No such file or directory
I1213 18:56:55.727071   64494 ssh_runner.go:362] scp /tmp/build.1059753095.tar --> /var/lib/minikube/build/build.1059753095.tar (3072 bytes)
I1213 18:56:55.744436   64494 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1059753095
I1213 18:56:55.752141   64494 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1059753095 -xf /var/lib/minikube/build/build.1059753095.tar
I1213 18:56:55.760261   64494 crio.go:315] Building image: /var/lib/minikube/build/build.1059753095
I1213 18:56:55.760328   64494 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-752103 /var/lib/minikube/build/build.1059753095 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1213 18:56:58.809583   64494 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-752103 /var/lib/minikube/build/build.1059753095 --cgroup-manager=cgroupfs: (3.049210634s)
I1213 18:56:58.809677   64494 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1059753095
I1213 18:56:58.817874   64494 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1059753095.tar
I1213 18:56:58.826020   64494 build_images.go:218] Built localhost/my-image:functional-752103 from /tmp/build.1059753095.tar
I1213 18:56:58.826052   64494 build_images.go:134] succeeded building to: functional-752103
I1213 18:56:58.826062   64494 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.81s)
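A minimal sketch of the image-build flow exercised above, assuming a local directory ./build containing a Dockerfile (the test's testdata/build context is not reproduced in this log) and a minikube binary on PATH:
    # build the context inside the node's runtime (podman under crio here) and tag the result
    minikube -p functional-752103 image build -t localhost/my-image:functional-752103 ./build
    # the new tag should then show up in the runtime's image list
    minikube -p functional-752103 image ls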

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-752103
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.26s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.55s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 image load --daemon kicbase/echo-server:functional-752103 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-752103 image load --daemon kicbase/echo-server:functional-752103 --alsologtostderr: (1.22871924s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.55s)
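A minimal sketch of the Setup and ImageLoadDaemon steps above: pull and retag an image in the host Docker daemon, then push it into the cluster's runtime (image and profile names are the ones used in this run):
    docker pull kicbase/echo-server:1.0
    docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-752103
    # --daemon takes the image from the host Docker daemon rather than a registry
    minikube -p functional-752103 image load --daemon kicbase/echo-server:functional-752103
    minikube -p functional-752103 image ls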

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (1.03s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 image load --daemon kicbase/echo-server:functional-752103 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (1.03s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.34s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-752103
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 image load --daemon kicbase/echo-server:functional-752103 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.34s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.14s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.14s)
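The three UpdateContextCmd cases above all run the same command against different cluster states; a minimal sketch:
    # rewrite the kubeconfig entry for this profile to match the current cluster endpoint
    minikube -p functional-752103 update-context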

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.5s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 image save kicbase/echo-server:functional-752103 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.50s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.65s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 image rm kicbase/echo-server:functional-752103 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.65s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.89s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.89s)
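The ImageSaveToFile, ImageRemove and ImageLoadFromFile cases above form a round trip; a minimal sketch using a hypothetical /tmp path in place of the Jenkins workspace path:
    minikube -p functional-752103 image save kicbase/echo-server:functional-752103 /tmp/echo-server-save.tar
    minikube -p functional-752103 image rm kicbase/echo-server:functional-752103
    minikube -p functional-752103 image load /tmp/echo-server-save.tar
    # the image should be listed again after the reload
    minikube -p functional-752103 image ls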

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.52s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.52s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.55s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-752103
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 image save --daemon kicbase/echo-server:functional-752103 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-752103
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.55s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.56s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "494.73407ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
E1213 18:54:44.927757    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1344: Took "65.352453ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.56s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.78s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "632.953218ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "143.465957ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.78s)
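The profile-listing timings above compare the full and light variants; a minimal sketch of the same four invocations (the light form skips the per-cluster status probe, which is presumably why those runs return noticeably faster):
    minikube profile list
    minikube profile list -l
    minikube profile list -o json
    minikube profile list -o json --light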

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-752103 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.00s)
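The tunnel tests start the tunnel as a long-running daemon via the test harness; a minimal sketch of doing the same by hand (backgrounding and killing the process is an assumption about how one would manage it outside the harness):
    minikube -p functional-752103 tunnel &
    TUNNEL_PID=$!
    # ... exercise services of type LoadBalancer while the tunnel is up ...
    kill "$TUNNEL_PID"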

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-752103 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: exit status 103
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.84s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-752103 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3505430281/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-752103 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (330.13517ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1213 18:56:45.911966    4637 retry.go:31] will retry after 483.955551ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-752103 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3505430281/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-752103 ssh "sudo umount -f /mount-9p": exit status 1 (280.17043ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-752103 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-752103 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3505430281/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.84s)
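A minimal sketch of the specific-port mount exercised above, with a hypothetical host directory in place of the test's temp dir:
    minikube mount -p functional-752103 /tmp/hostdir:/mount-9p --port 46464 &
    MOUNT_PID=$!
    # verify the 9p mount is visible from inside the node, as the test does
    minikube -p functional-752103 ssh "findmnt -T /mount-9p | grep 9p"
    minikube -p functional-752103 ssh -- ls -la /mount-9p
    kill "$MOUNT_PID"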

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (2.09s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-752103 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2250222461/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-752103 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2250222461/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-752103 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2250222461/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-752103 ssh "findmnt -T" /mount1: exit status 1 (548.672166ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1213 18:56:47.973974    4637 retry.go:31] will retry after 667.950113ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-752103 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-752103 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-752103 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2250222461/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-752103 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2250222461/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-752103 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2250222461/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (2.09s)
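The VerifyCleanup case mounts the same host directory at three targets and then tears everything down with a single --kill invocation; a minimal sketch (the host directory is hypothetical):
    minikube mount -p functional-752103 /tmp/hostdir:/mount1 &
    minikube mount -p functional-752103 /tmp/hostdir:/mount2 &
    minikube mount -p functional-752103 /tmp/hostdir:/mount3 &
    minikube -p functional-752103 ssh "findmnt -T /mount1"
    # kill the mount processes spawned for this profile, as the test does before
    # checking that the background processes are gone
    minikube mount -p functional-752103 --kill=true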

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-752103
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.03s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-752103
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-752103
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (202.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1213 18:59:44.921207    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 18:59:45.767259    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 18:59:45.773618    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 18:59:45.784945    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 18:59:45.806326    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 18:59:45.847694    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 18:59:45.929150    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 18:59:46.090617    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 18:59:46.412253    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 18:59:47.054232    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 18:59:48.335550    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 18:59:50.897136    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 18:59:56.018453    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 19:00:06.260531    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 19:00:26.741811    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 19:01:07.703676    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 19:01:42.459632    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-350101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-605114 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m22.049607641s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (202.92s)
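A minimal sketch of the multi-control-plane cluster start exercised above, with the same flags as this run:
    minikube -p ha-605114 start --ha --memory 3072 --wait true --driver=docker --container-runtime=crio
    minikube -p ha-605114 status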

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (6.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-605114 kubectl -- rollout status deployment/busybox: (3.913190991s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 kubectl -- exec busybox-7b57f96db7-gqp98 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 kubectl -- exec busybox-7b57f96db7-h5qqv -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 kubectl -- exec busybox-7b57f96db7-rgrxn -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 kubectl -- exec busybox-7b57f96db7-gqp98 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 kubectl -- exec busybox-7b57f96db7-h5qqv -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 kubectl -- exec busybox-7b57f96db7-rgrxn -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 kubectl -- exec busybox-7b57f96db7-gqp98 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 kubectl -- exec busybox-7b57f96db7-h5qqv -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 kubectl -- exec busybox-7b57f96db7-rgrxn -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.65s)
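A minimal sketch of the DNS smoke test above; the single-pod lookup below is an assumption (the test enumerates all busybox pod names with the same jsonpath and execs into each of them):
    kubectl --context ha-605114 apply -f ./testdata/ha/ha-pod-dns-test.yaml
    kubectl --context ha-605114 rollout status deployment/busybox
    # grab one pod name (assuming only the busybox pods are in the default namespace)
    POD=$(kubectl --context ha-605114 get pods -o jsonpath='{.items[0].metadata.name}')
    kubectl --context ha-605114 exec "$POD" -- nslookup kubernetes.default.svc.cluster.local
    kubectl --context ha-605114 exec "$POD" -- nslookup kubernetes.io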

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 kubectl -- exec busybox-7b57f96db7-gqp98 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 kubectl -- exec busybox-7b57f96db7-gqp98 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 kubectl -- exec busybox-7b57f96db7-h5qqv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 kubectl -- exec busybox-7b57f96db7-h5qqv -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 kubectl -- exec busybox-7b57f96db7-rgrxn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 kubectl -- exec busybox-7b57f96db7-rgrxn -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.56s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (61.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 node add --alsologtostderr -v 5
E1213 19:02:29.627112    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-605114 node add --alsologtostderr -v 5: (1m0.597428948s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-605114 status --alsologtostderr -v 5: (1.11595769s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (61.71s)
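A minimal sketch of adding a worker node to the running cluster, as the test above does, and re-checking status:
    minikube -p ha-605114 node add
    minikube -p ha-605114 status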

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-605114 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.064310929s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.06s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (20.3s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-605114 status --output json --alsologtostderr -v 5: (1.052054813s)
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 cp testdata/cp-test.txt ha-605114:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 ssh -n ha-605114 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 cp ha-605114:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1407969839/001/cp-test_ha-605114.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 ssh -n ha-605114 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 cp ha-605114:/home/docker/cp-test.txt ha-605114-m02:/home/docker/cp-test_ha-605114_ha-605114-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 ssh -n ha-605114 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 ssh -n ha-605114-m02 "sudo cat /home/docker/cp-test_ha-605114_ha-605114-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 cp ha-605114:/home/docker/cp-test.txt ha-605114-m03:/home/docker/cp-test_ha-605114_ha-605114-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 ssh -n ha-605114 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 ssh -n ha-605114-m03 "sudo cat /home/docker/cp-test_ha-605114_ha-605114-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 cp ha-605114:/home/docker/cp-test.txt ha-605114-m04:/home/docker/cp-test_ha-605114_ha-605114-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 ssh -n ha-605114 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 ssh -n ha-605114-m04 "sudo cat /home/docker/cp-test_ha-605114_ha-605114-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 cp testdata/cp-test.txt ha-605114-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 ssh -n ha-605114-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 cp ha-605114-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1407969839/001/cp-test_ha-605114-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 ssh -n ha-605114-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 cp ha-605114-m02:/home/docker/cp-test.txt ha-605114:/home/docker/cp-test_ha-605114-m02_ha-605114.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 ssh -n ha-605114-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 ssh -n ha-605114 "sudo cat /home/docker/cp-test_ha-605114-m02_ha-605114.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 cp ha-605114-m02:/home/docker/cp-test.txt ha-605114-m03:/home/docker/cp-test_ha-605114-m02_ha-605114-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 ssh -n ha-605114-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 ssh -n ha-605114-m03 "sudo cat /home/docker/cp-test_ha-605114-m02_ha-605114-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 cp ha-605114-m02:/home/docker/cp-test.txt ha-605114-m04:/home/docker/cp-test_ha-605114-m02_ha-605114-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 ssh -n ha-605114-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 ssh -n ha-605114-m04 "sudo cat /home/docker/cp-test_ha-605114-m02_ha-605114-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 cp testdata/cp-test.txt ha-605114-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 ssh -n ha-605114-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 cp ha-605114-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1407969839/001/cp-test_ha-605114-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 ssh -n ha-605114-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 cp ha-605114-m03:/home/docker/cp-test.txt ha-605114:/home/docker/cp-test_ha-605114-m03_ha-605114.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 ssh -n ha-605114-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 ssh -n ha-605114 "sudo cat /home/docker/cp-test_ha-605114-m03_ha-605114.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 cp ha-605114-m03:/home/docker/cp-test.txt ha-605114-m02:/home/docker/cp-test_ha-605114-m03_ha-605114-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 ssh -n ha-605114-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 ssh -n ha-605114-m02 "sudo cat /home/docker/cp-test_ha-605114-m03_ha-605114-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 cp ha-605114-m03:/home/docker/cp-test.txt ha-605114-m04:/home/docker/cp-test_ha-605114-m03_ha-605114-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 ssh -n ha-605114-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 ssh -n ha-605114-m04 "sudo cat /home/docker/cp-test_ha-605114-m03_ha-605114-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 cp testdata/cp-test.txt ha-605114-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 ssh -n ha-605114-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 cp ha-605114-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1407969839/001/cp-test_ha-605114-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 ssh -n ha-605114-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 cp ha-605114-m04:/home/docker/cp-test.txt ha-605114:/home/docker/cp-test_ha-605114-m04_ha-605114.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 ssh -n ha-605114-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 ssh -n ha-605114 "sudo cat /home/docker/cp-test_ha-605114-m04_ha-605114.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 cp ha-605114-m04:/home/docker/cp-test.txt ha-605114-m02:/home/docker/cp-test_ha-605114-m04_ha-605114-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 ssh -n ha-605114-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 ssh -n ha-605114-m02 "sudo cat /home/docker/cp-test_ha-605114-m04_ha-605114-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 cp ha-605114-m04:/home/docker/cp-test.txt ha-605114-m03:/home/docker/cp-test_ha-605114-m04_ha-605114-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 ssh -n ha-605114-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 ssh -n ha-605114-m03 "sudo cat /home/docker/cp-test_ha-605114-m04_ha-605114-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.30s)
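Note: the copy matrix above repeats one pattern per node pair; the three directions it covers look like this (sketch; profile and node names are from this run, the /tmp destination path is illustrative):

    # host -> node
    out/minikube-linux-arm64 -p ha-605114 cp testdata/cp-test.txt ha-605114:/home/docker/cp-test.txt
    # node -> host
    out/minikube-linux-arm64 -p ha-605114 cp ha-605114:/home/docker/cp-test.txt /tmp/cp-test_ha-605114.txt
    # node -> node, then verify on the target over ssh
    out/minikube-linux-arm64 -p ha-605114 cp ha-605114:/home/docker/cp-test.txt ha-605114-m02:/home/docker/cp-test_ha-605114_ha-605114-m02.txt
    out/minikube-linux-arm64 -p ha-605114 ssh -n ha-605114-m02 "sudo cat /home/docker/cp-test_ha-605114_ha-605114-m02.txt"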

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (13.01s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-605114 node stop m02 --alsologtostderr -v 5: (12.225468569s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-605114 status --alsologtostderr -v 5: exit status 7 (780.236831ms)

                                                
                                                
-- stdout --
	ha-605114
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-605114-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-605114-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-605114-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 19:03:56.064820   80607 out.go:360] Setting OutFile to fd 1 ...
	I1213 19:03:56.064958   80607 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 19:03:56.064968   80607 out.go:374] Setting ErrFile to fd 2...
	I1213 19:03:56.064974   80607 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 19:03:56.065269   80607 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-2686/.minikube/bin
	I1213 19:03:56.065463   80607 out.go:368] Setting JSON to false
	I1213 19:03:56.065506   80607 mustload.go:66] Loading cluster: ha-605114
	I1213 19:03:56.065581   80607 notify.go:221] Checking for updates...
	I1213 19:03:56.066846   80607 config.go:182] Loaded profile config "ha-605114": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 19:03:56.066878   80607 status.go:174] checking status of ha-605114 ...
	I1213 19:03:56.067608   80607 cli_runner.go:164] Run: docker container inspect ha-605114 --format={{.State.Status}}
	I1213 19:03:56.091762   80607 status.go:371] ha-605114 host status = "Running" (err=<nil>)
	I1213 19:03:56.091812   80607 host.go:66] Checking if "ha-605114" exists ...
	I1213 19:03:56.092170   80607 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-605114
	I1213 19:03:56.128238   80607 host.go:66] Checking if "ha-605114" exists ...
	I1213 19:03:56.128566   80607 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 19:03:56.128621   80607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114
	I1213 19:03:56.149420   80607 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/ha-605114/id_rsa Username:docker}
	I1213 19:03:56.254544   80607 ssh_runner.go:195] Run: systemctl --version
	I1213 19:03:56.261658   80607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 19:03:56.274627   80607 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 19:03:56.337207   80607 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:true NGoroutines:72 SystemTime:2025-12-13 19:03:56.327381753 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 19:03:56.337743   80607 kubeconfig.go:125] found "ha-605114" server: "https://192.168.49.254:8443"
	I1213 19:03:56.337783   80607 api_server.go:166] Checking apiserver status ...
	I1213 19:03:56.337828   80607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:03:56.350716   80607 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1257/cgroup
	I1213 19:03:56.359524   80607 api_server.go:182] apiserver freezer: "6:freezer:/docker/b8b77eca4604af1b6af60bca70543cf68142cce11359af7814880328fa73eb01/crio/crio-15458f8937b2221c30519735b675703a4519129d7e16b477442e647635f85791"
	I1213 19:03:56.359594   80607 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b8b77eca4604af1b6af60bca70543cf68142cce11359af7814880328fa73eb01/crio/crio-15458f8937b2221c30519735b675703a4519129d7e16b477442e647635f85791/freezer.state
	I1213 19:03:56.368017   80607 api_server.go:204] freezer state: "THAWED"
	I1213 19:03:56.368047   80607 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1213 19:03:56.376216   80607 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1213 19:03:56.376251   80607 status.go:463] ha-605114 apiserver status = Running (err=<nil>)
	I1213 19:03:56.376262   80607 status.go:176] ha-605114 status: &{Name:ha-605114 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 19:03:56.376287   80607 status.go:174] checking status of ha-605114-m02 ...
	I1213 19:03:56.376583   80607 cli_runner.go:164] Run: docker container inspect ha-605114-m02 --format={{.State.Status}}
	I1213 19:03:56.394398   80607 status.go:371] ha-605114-m02 host status = "Stopped" (err=<nil>)
	I1213 19:03:56.394423   80607 status.go:384] host is not running, skipping remaining checks
	I1213 19:03:56.394430   80607 status.go:176] ha-605114-m02 status: &{Name:ha-605114-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 19:03:56.394451   80607 status.go:174] checking status of ha-605114-m03 ...
	I1213 19:03:56.394768   80607 cli_runner.go:164] Run: docker container inspect ha-605114-m03 --format={{.State.Status}}
	I1213 19:03:56.411550   80607 status.go:371] ha-605114-m03 host status = "Running" (err=<nil>)
	I1213 19:03:56.411577   80607 host.go:66] Checking if "ha-605114-m03" exists ...
	I1213 19:03:56.411923   80607 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-605114-m03
	I1213 19:03:56.430455   80607 host.go:66] Checking if "ha-605114-m03" exists ...
	I1213 19:03:56.430760   80607 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 19:03:56.430795   80607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114-m03
	I1213 19:03:56.448614   80607 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/ha-605114-m03/id_rsa Username:docker}
	I1213 19:03:56.555178   80607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 19:03:56.568862   80607 kubeconfig.go:125] found "ha-605114" server: "https://192.168.49.254:8443"
	I1213 19:03:56.568900   80607 api_server.go:166] Checking apiserver status ...
	I1213 19:03:56.568943   80607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:03:56.580880   80607 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1194/cgroup
	I1213 19:03:56.592004   80607 api_server.go:182] apiserver freezer: "6:freezer:/docker/72290e9293dd00db9e8e8bc85de9b499c8935c0640b2147abc376701c7619dce/crio/crio-1782f0f1f34fb034109bc69bc8ce162d49e8555b073405267e8acc600834249b"
	I1213 19:03:56.592135   80607 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/72290e9293dd00db9e8e8bc85de9b499c8935c0640b2147abc376701c7619dce/crio/crio-1782f0f1f34fb034109bc69bc8ce162d49e8555b073405267e8acc600834249b/freezer.state
	I1213 19:03:56.600865   80607 api_server.go:204] freezer state: "THAWED"
	I1213 19:03:56.600894   80607 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1213 19:03:56.609476   80607 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1213 19:03:56.609503   80607 status.go:463] ha-605114-m03 apiserver status = Running (err=<nil>)
	I1213 19:03:56.609513   80607 status.go:176] ha-605114-m03 status: &{Name:ha-605114-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 19:03:56.609538   80607 status.go:174] checking status of ha-605114-m04 ...
	I1213 19:03:56.609851   80607 cli_runner.go:164] Run: docker container inspect ha-605114-m04 --format={{.State.Status}}
	I1213 19:03:56.630849   80607 status.go:371] ha-605114-m04 host status = "Running" (err=<nil>)
	I1213 19:03:56.630874   80607 host.go:66] Checking if "ha-605114-m04" exists ...
	I1213 19:03:56.631182   80607 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-605114-m04
	I1213 19:03:56.650031   80607 host.go:66] Checking if "ha-605114-m04" exists ...
	I1213 19:03:56.650332   80607 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 19:03:56.650382   80607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605114-m04
	I1213 19:03:56.673079   80607 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/ha-605114-m04/id_rsa Username:docker}
	I1213 19:03:56.778622   80607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 19:03:56.792325   80607 status.go:176] ha-605114-m04 status: &{Name:ha-605114-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.01s)
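Note: as the exit status 7 above shows, the status command returns non-zero while any node is stopped, which the test expects; scripted checks around a node stop need to tolerate that:

    out/minikube-linux-arm64 -p ha-605114 node stop m02 --alsologtostderr -v 5
    # status exits 7 while m02 is down; don't let `set -e` abort the script here
    out/minikube-linux-arm64 -p ha-605114 status --alsologtostderr -v 5 || true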

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.81s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (31.31s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-605114 node start m02 --alsologtostderr -v 5: (29.909156152s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-605114 status --alsologtostderr -v 5: (1.280222644s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (31.31s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.35s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.352128471s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.35s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (147.94s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 stop --alsologtostderr -v 5
E1213 19:04:44.920894    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 19:04:45.533524    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-350101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 19:04:45.767116    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-605114 stop --alsologtostderr -v 5: (37.80082084s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 start --wait true --alsologtostderr -v 5
E1213 19:05:13.469290    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 19:06:42.459599    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-350101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-605114 start --wait true --alsologtostderr -v 5: (1m49.999629398s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (147.94s)
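Note: the restart check above boils down to comparing node list output before and after a full stop/start cycle (sketch, profile name from this run):

    out/minikube-linux-arm64 -p ha-605114 node list --alsologtostderr -v 5
    out/minikube-linux-arm64 -p ha-605114 stop --alsologtostderr -v 5
    out/minikube-linux-arm64 -p ha-605114 start --wait true --alsologtostderr -v 5
    out/minikube-linux-arm64 -p ha-605114 node list --alsologtostderr -v 5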

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (12.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-605114 node delete m03 --alsologtostderr -v 5: (11.084195504s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.13s)
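Note: the post-delete verification relies on a kubectl go-template that prints each node's Ready condition; the two commands below are the essence of the check, taken from this run:

    out/minikube-linux-arm64 -p ha-605114 node delete m03 --alsologtostderr -v 5
    kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"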

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.78s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (36.17s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-605114 stop --alsologtostderr -v 5: (36.070962296s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-605114 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-605114 status --alsologtostderr -v 5: exit status 7 (103.569708ms)

                                                
                                                
-- stdout --
	ha-605114
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-605114-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-605114-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 19:07:47.243077   92896 out.go:360] Setting OutFile to fd 1 ...
	I1213 19:07:47.243193   92896 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 19:07:47.243205   92896 out.go:374] Setting ErrFile to fd 2...
	I1213 19:07:47.243210   92896 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 19:07:47.243451   92896 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-2686/.minikube/bin
	I1213 19:07:47.243631   92896 out.go:368] Setting JSON to false
	I1213 19:07:47.243671   92896 mustload.go:66] Loading cluster: ha-605114
	I1213 19:07:47.243747   92896 notify.go:221] Checking for updates...
	I1213 19:07:47.244976   92896 config.go:182] Loaded profile config "ha-605114": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 19:07:47.245035   92896 status.go:174] checking status of ha-605114 ...
	I1213 19:07:47.245895   92896 cli_runner.go:164] Run: docker container inspect ha-605114 --format={{.State.Status}}
	I1213 19:07:47.263416   92896 status.go:371] ha-605114 host status = "Stopped" (err=<nil>)
	I1213 19:07:47.263439   92896 status.go:384] host is not running, skipping remaining checks
	I1213 19:07:47.263446   92896 status.go:176] ha-605114 status: &{Name:ha-605114 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 19:07:47.263472   92896 status.go:174] checking status of ha-605114-m02 ...
	I1213 19:07:47.263781   92896 cli_runner.go:164] Run: docker container inspect ha-605114-m02 --format={{.State.Status}}
	I1213 19:07:47.281409   92896 status.go:371] ha-605114-m02 host status = "Stopped" (err=<nil>)
	I1213 19:07:47.281438   92896 status.go:384] host is not running, skipping remaining checks
	I1213 19:07:47.281445   92896 status.go:176] ha-605114-m02 status: &{Name:ha-605114-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 19:07:47.281464   92896 status.go:174] checking status of ha-605114-m04 ...
	I1213 19:07:47.281749   92896 cli_runner.go:164] Run: docker container inspect ha-605114-m04 --format={{.State.Status}}
	I1213 19:07:47.296307   92896 status.go:371] ha-605114-m04 host status = "Stopped" (err=<nil>)
	I1213 19:07:47.296330   92896 status.go:384] host is not running, skipping remaining checks
	I1213 19:07:47.296337   92896 status.go:176] ha-605114-m04 status: &{Name:ha-605114-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.17s)

                                                
                                    
TestJSONOutput/start/Command (82.03s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-981625 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-981625 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m22.025425239s)
--- PASS: TestJSONOutput/start/Command (82.03s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.86s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-981625 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-981625 --output=json --user=testUser: (5.864123475s)
--- PASS: TestJSONOutput/stop/Command (5.86s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.25s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-588902 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-588902 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (103.98876ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"a8fd61e6-215c-4626-8b35-c9b338df1cbf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-588902] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6e9e2793-152b-4fbe-9a65-8ed8acb28a50","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22122"}}
	{"specversion":"1.0","id":"46338979-e48f-492a-acc0-5fa5e3248097","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a58b80ad-ec55-4f80-bc2b-089731f8f53c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22122-2686/kubeconfig"}}
	{"specversion":"1.0","id":"5909d1bc-c97b-43e6-9773-64f2581dca09","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-2686/.minikube"}}
	{"specversion":"1.0","id":"4fd6547e-f32c-403c-9799-88fbf9662562","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"816b009a-ce6f-4443-beaf-20c8e97c23a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"13594aa0-4158-45fc-8263-c84bfc043524","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-588902" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-588902
--- PASS: TestErrorJSONOutput (0.25s)
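Note: each line of the --output=json stream is a CloudEvent like the ones above (type io.k8s.sigs.minikube.step/info/error, payload under .data). A hedged example of pulling the error out of such a stream, assuming jq is available on the host (jq is not part of this suite and the filter is purely illustrative):

    out/minikube-linux-arm64 start -p json-output-error-588902 --memory=3072 --output=json --wait=true --driver=fail | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data | "\(.name): \(.message) (exit \(.exitcode))"'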

                                                
                                    
TestKicCustomNetwork/create_custom_network (42.46s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-827527 --network=
E1213 19:19:44.921195    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 19:19:45.767248    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-827527 --network=: (40.209276692s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-827527" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-827527
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-827527: (2.223067971s)
--- PASS: TestKicCustomNetwork/create_custom_network (42.46s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (35.52s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-929133 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-929133 --network=bridge: (33.37130662s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-929133" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-929133
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-929133: (2.124330487s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (35.52s)

                                                
                                    
TestKicExistingNetwork (33.27s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1213 19:20:37.026577    4637 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1213 19:20:37.043082    4637 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1213 19:20:37.043157    4637 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1213 19:20:37.043175    4637 cli_runner.go:164] Run: docker network inspect existing-network
W1213 19:20:37.064228    4637 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1213 19:20:37.064264    4637 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1213 19:20:37.064284    4637 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1213 19:20:37.064442    4637 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1213 19:20:37.082191    4637 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a2f3617b1da5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ee:bd:c1:14:a9:f1} reservation:<nil>}
I1213 19:20:37.082561    4637 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001663210}
I1213 19:20:37.082591    4637 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1213 19:20:37.082644    4637 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1213 19:20:37.144716    4637 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-742147 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-742147 --network=existing-network: (31.017818677s)
helpers_test.go:176: Cleaning up "existing-network-742147" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-742147
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-742147: (2.099473312s)
I1213 19:21:10.278296    4637 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (33.27s)
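Note: the pre-existing network this test reuses is created with plain docker, carrying the same labels minikube applies to its own networks; the essential commands, taken verbatim from the log above:

    docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
    out/minikube-linux-arm64 start -p existing-network-742147 --network=existing-network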

                                                
                                    
TestKicCustomSubnet (35.84s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-934953 --subnet=192.168.60.0/24
E1213 19:21:25.534856    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-350101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 19:21:42.459371    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-350101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-934953 --subnet=192.168.60.0/24: (33.49008638s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-934953 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-934953" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-934953
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-934953: (2.326409651s)
--- PASS: TestKicCustomSubnet (35.84s)

                                                
                                    
TestKicStaticIP (34.2s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-686220 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-686220 --static-ip=192.168.200.200: (31.744689663s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-686220 ip
helpers_test.go:176: Cleaning up "static-ip-686220" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-686220
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-686220: (2.288726496s)
--- PASS: TestKicStaticIP (34.20s)
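Note: the two KIC addressing tests above each reduce to a start flag plus a verification command (values from these runs):

    # pin the cluster network's subnet, then read it back from docker
    out/minikube-linux-arm64 start -p custom-subnet-934953 --subnet=192.168.60.0/24
    docker network inspect custom-subnet-934953 --format "{{(index .IPAM.Config 0).Subnet}}"
    # pin the node IP, then confirm it
    out/minikube-linux-arm64 start -p static-ip-686220 --static-ip=192.168.200.200
    out/minikube-linux-arm64 -p static-ip-686220 ip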

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (70.68s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-266224 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-266224 --driver=docker  --container-runtime=crio: (32.157967192s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-268817 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-268817 --driver=docker  --container-runtime=crio: (32.906256595s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-266224
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-268817
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:176: Cleaning up "second-268817" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p second-268817
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p second-268817: (2.087209364s)
helpers_test.go:176: Cleaning up "first-266224" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p first-266224
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p first-266224: (2.080628802s)
--- PASS: TestMinikubeProfile (70.68s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (9.44s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-096365 --memory=3072 --mount-string /tmp/TestMountStartserial3271361826/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-096365 --memory=3072 --mount-string /tmp/TestMountStartserial3271361826/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.442069456s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.44s)
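Note: the mount options exercised here, and the in-guest listing used later to verify the mount, amount to the following (sketch; host path and port are from this run):

    out/minikube-linux-arm64 start -p mount-start-1-096365 --memory=3072 --mount-string /tmp/TestMountStartserial3271361826/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 -p mount-start-1-096365 ssh -- ls /minikube-host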

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-096365 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (9.13s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-098227 --memory=3072 --mount-string /tmp/TestMountStartserial3271361826/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-098227 --memory=3072 --mount-string /tmp/TestMountStartserial3271361826/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.131386045s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.13s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.31s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-098227 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.31s)

TestMountStart/serial/DeleteFirst (1.7s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-096365 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-096365 --alsologtostderr -v=5: (1.696464769s)
--- PASS: TestMountStart/serial/DeleteFirst (1.70s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-098227 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.33s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-098227
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-098227: (1.325139498s)
--- PASS: TestMountStart/serial/Stop (1.33s)

TestMountStart/serial/RestartStopped (8.33s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-098227
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-098227: (7.328575112s)
--- PASS: TestMountStart/serial/RestartStopped (8.33s)

TestMountStart/serial/VerifyMountPostStop (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-098227 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

TestMultiNode/serial/FreshStart2Nodes (137.5s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-905631 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1213 19:24:44.920528    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 19:24:45.767281    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-905631 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m16.960296466s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-905631 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (137.50s)

TestMultiNode/serial/DeployApp2Nodes (4.76s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-905631 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-905631 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-905631 -- rollout status deployment/busybox: (3.028122955s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-905631 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-905631 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-905631 -- exec busybox-7b57f96db7-5tk9z -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-905631 -- exec busybox-7b57f96db7-jcq77 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-905631 -- exec busybox-7b57f96db7-5tk9z -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-905631 -- exec busybox-7b57f96db7-jcq77 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-905631 -- exec busybox-7b57f96db7-5tk9z -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-905631 -- exec busybox-7b57f96db7-jcq77 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.76s)
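
The two-node bring-up and DNS checks above reduce to the following sequence. A minimal sketch, assuming the busybox deployment manifest from the test's testdata/multinodes/ directory (profile name illustrative; pod names differ per run):

  minikube start -p multinode-demo --memory=3072 --nodes=2 --wait=true --driver=docker --container-runtime=crio
  minikube kubectl -p multinode-demo -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
  minikube kubectl -p multinode-demo -- rollout status deployment/busybox
  # resolve an external name and the in-cluster service name from one busybox pod
  POD=$(minikube kubectl -p multinode-demo -- get pods -o jsonpath='{.items[0].metadata.name}')
  minikube kubectl -p multinode-demo -- exec "$POD" -- nslookup kubernetes.io
  minikube kubectl -p multinode-demo -- exec "$POD" -- nslookup kubernetes.default.svc.cluster.local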

TestMultiNode/serial/PingHostFrom2Pods (0.99s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-905631 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-905631 -- exec busybox-7b57f96db7-5tk9z -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-905631 -- exec busybox-7b57f96db7-5tk9z -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-905631 -- exec busybox-7b57f96db7-jcq77 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-905631 -- exec busybox-7b57f96db7-jcq77 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.99s)
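
The host-reachability check above first resolves host.minikube.internal inside a pod and then pings the returned address. A minimal sketch of the same probe, reusing the illustrative profile from the previous sketch:

  POD=$(minikube kubectl -p multinode-demo -- get pods -o jsonpath='{.items[0].metadata.name}')
  HOST_IP=$(minikube kubectl -p multinode-demo -- exec "$POD" -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
  minikube kubectl -p multinode-demo -- exec "$POD" -- sh -c "ping -c 1 $HOST_IP"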

TestMultiNode/serial/AddNode (57.93s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-905631 -v=5 --alsologtostderr
E1213 19:26:42.459295    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-350101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-905631 -v=5 --alsologtostderr: (57.224673877s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-905631 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (57.93s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-905631 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.74s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.74s)

TestMultiNode/serial/CopyFile (10.62s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-905631 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-905631 cp testdata/cp-test.txt multinode-905631:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-905631 ssh -n multinode-905631 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-905631 cp multinode-905631:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2627287527/001/cp-test_multinode-905631.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-905631 ssh -n multinode-905631 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-905631 cp multinode-905631:/home/docker/cp-test.txt multinode-905631-m02:/home/docker/cp-test_multinode-905631_multinode-905631-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-905631 ssh -n multinode-905631 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-905631 ssh -n multinode-905631-m02 "sudo cat /home/docker/cp-test_multinode-905631_multinode-905631-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-905631 cp multinode-905631:/home/docker/cp-test.txt multinode-905631-m03:/home/docker/cp-test_multinode-905631_multinode-905631-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-905631 ssh -n multinode-905631 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-905631 ssh -n multinode-905631-m03 "sudo cat /home/docker/cp-test_multinode-905631_multinode-905631-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-905631 cp testdata/cp-test.txt multinode-905631-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-905631 ssh -n multinode-905631-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-905631 cp multinode-905631-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2627287527/001/cp-test_multinode-905631-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-905631 ssh -n multinode-905631-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-905631 cp multinode-905631-m02:/home/docker/cp-test.txt multinode-905631:/home/docker/cp-test_multinode-905631-m02_multinode-905631.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-905631 ssh -n multinode-905631-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-905631 ssh -n multinode-905631 "sudo cat /home/docker/cp-test_multinode-905631-m02_multinode-905631.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-905631 cp multinode-905631-m02:/home/docker/cp-test.txt multinode-905631-m03:/home/docker/cp-test_multinode-905631-m02_multinode-905631-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-905631 ssh -n multinode-905631-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-905631 ssh -n multinode-905631-m03 "sudo cat /home/docker/cp-test_multinode-905631-m02_multinode-905631-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-905631 cp testdata/cp-test.txt multinode-905631-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-905631 ssh -n multinode-905631-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-905631 cp multinode-905631-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2627287527/001/cp-test_multinode-905631-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-905631 ssh -n multinode-905631-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-905631 cp multinode-905631-m03:/home/docker/cp-test.txt multinode-905631:/home/docker/cp-test_multinode-905631-m03_multinode-905631.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-905631 ssh -n multinode-905631-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-905631 ssh -n multinode-905631 "sudo cat /home/docker/cp-test_multinode-905631-m03_multinode-905631.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-905631 cp multinode-905631-m03:/home/docker/cp-test.txt multinode-905631-m02:/home/docker/cp-test_multinode-905631-m03_multinode-905631-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-905631 ssh -n multinode-905631-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-905631 ssh -n multinode-905631-m02 "sudo cat /home/docker/cp-test_multinode-905631-m03_multinode-905631-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.62s)
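
Every hop in the copy matrix above follows the same copy-then-verify pattern. A minimal sketch of one host-to-node and one node-to-node hop, using the same illustrative two-node profile:

  minikube -p multinode-demo cp testdata/cp-test.txt multinode-demo:/home/docker/cp-test.txt
  minikube -p multinode-demo ssh -n multinode-demo "sudo cat /home/docker/cp-test.txt"
  minikube -p multinode-demo cp multinode-demo:/home/docker/cp-test.txt multinode-demo-m02:/home/docker/cp-test.txt
  minikube -p multinode-demo ssh -n multinode-demo-m02 "sudo cat /home/docker/cp-test.txt"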

TestMultiNode/serial/StopNode (2.46s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-905631 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-905631 node stop m03: (1.337676064s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-905631 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-905631 status: exit status 7 (545.167431ms)

                                                
                                                
-- stdout --
	multinode-905631
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-905631-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-905631-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-905631 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-905631 status --alsologtostderr: exit status 7 (575.779429ms)

                                                
                                                
-- stdout --
	multinode-905631
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-905631-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-905631-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 19:27:38.632066  156504 out.go:360] Setting OutFile to fd 1 ...
	I1213 19:27:38.632184  156504 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 19:27:38.632195  156504 out.go:374] Setting ErrFile to fd 2...
	I1213 19:27:38.632200  156504 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 19:27:38.632461  156504 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-2686/.minikube/bin
	I1213 19:27:38.632652  156504 out.go:368] Setting JSON to false
	I1213 19:27:38.632696  156504 mustload.go:66] Loading cluster: multinode-905631
	I1213 19:27:38.632771  156504 notify.go:221] Checking for updates...
	I1213 19:27:38.634023  156504 config.go:182] Loaded profile config "multinode-905631": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 19:27:38.634057  156504 status.go:174] checking status of multinode-905631 ...
	I1213 19:27:38.636920  156504 cli_runner.go:164] Run: docker container inspect multinode-905631 --format={{.State.Status}}
	I1213 19:27:38.655354  156504 status.go:371] multinode-905631 host status = "Running" (err=<nil>)
	I1213 19:27:38.655377  156504 host.go:66] Checking if "multinode-905631" exists ...
	I1213 19:27:38.655680  156504 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-905631
	I1213 19:27:38.686610  156504 host.go:66] Checking if "multinode-905631" exists ...
	I1213 19:27:38.686942  156504 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 19:27:38.686988  156504 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-905631
	I1213 19:27:38.705749  156504 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/multinode-905631/id_rsa Username:docker}
	I1213 19:27:38.814805  156504 ssh_runner.go:195] Run: systemctl --version
	I1213 19:27:38.821383  156504 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 19:27:38.834433  156504 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 19:27:38.896399  156504 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-13 19:27:38.886459215 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 19:27:38.896974  156504 kubeconfig.go:125] found "multinode-905631" server: "https://192.168.67.2:8443"
	I1213 19:27:38.896998  156504 api_server.go:166] Checking apiserver status ...
	I1213 19:27:38.897083  156504 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:27:38.910086  156504 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1257/cgroup
	I1213 19:27:38.919757  156504 api_server.go:182] apiserver freezer: "6:freezer:/docker/03ba66c5710ee37ca87950ec16581b3fd836defa36144b1954ff9db908201f06/crio/crio-e315e43a828b72e17f56a8ce577580a6db6eadaff4721ff5d54a7ff5b5278158"
	I1213 19:27:38.919846  156504 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/03ba66c5710ee37ca87950ec16581b3fd836defa36144b1954ff9db908201f06/crio/crio-e315e43a828b72e17f56a8ce577580a6db6eadaff4721ff5d54a7ff5b5278158/freezer.state
	I1213 19:27:38.927581  156504 api_server.go:204] freezer state: "THAWED"
	I1213 19:27:38.927617  156504 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1213 19:27:38.936231  156504 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1213 19:27:38.936260  156504 status.go:463] multinode-905631 apiserver status = Running (err=<nil>)
	I1213 19:27:38.936271  156504 status.go:176] multinode-905631 status: &{Name:multinode-905631 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 19:27:38.936288  156504 status.go:174] checking status of multinode-905631-m02 ...
	I1213 19:27:38.936602  156504 cli_runner.go:164] Run: docker container inspect multinode-905631-m02 --format={{.State.Status}}
	I1213 19:27:38.960017  156504 status.go:371] multinode-905631-m02 host status = "Running" (err=<nil>)
	I1213 19:27:38.960041  156504 host.go:66] Checking if "multinode-905631-m02" exists ...
	I1213 19:27:38.960359  156504 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-905631-m02
	I1213 19:27:38.978339  156504 host.go:66] Checking if "multinode-905631-m02" exists ...
	I1213 19:27:38.978714  156504 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 19:27:38.978759  156504 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-905631-m02
	I1213 19:27:38.998728  156504 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/22122-2686/.minikube/machines/multinode-905631-m02/id_rsa Username:docker}
	I1213 19:27:39.102774  156504 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 19:27:39.117515  156504 status.go:176] multinode-905631-m02 status: &{Name:multinode-905631-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1213 19:27:39.117551  156504 status.go:174] checking status of multinode-905631-m03 ...
	I1213 19:27:39.117890  156504 cli_runner.go:164] Run: docker container inspect multinode-905631-m03 --format={{.State.Status}}
	I1213 19:27:39.135036  156504 status.go:371] multinode-905631-m03 host status = "Stopped" (err=<nil>)
	I1213 19:27:39.135061  156504 status.go:384] host is not running, skipping remaining checks
	I1213 19:27:39.135068  156504 status.go:176] multinode-905631-m03 status: &{Name:multinode-905631-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.46s)
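
As the transcript shows, minikube status reports exit code 7 as soon as any node in the profile is stopped, with the stopped worker listed as "host: Stopped". A script that gates on cluster health therefore has to inspect the exit code rather than the output alone; a minimal sketch:

  minikube -p multinode-demo node stop m03
  minikube -p multinode-demo status --alsologtostderr
  echo "status exit code: $?"   # 7 in this run while m03 was stopped; 0 once every node is running again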

TestMultiNode/serial/StartAfterStop (8.36s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-905631 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-905631 node start m03 -v=5 --alsologtostderr: (7.558727863s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-905631 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.36s)

TestMultiNode/serial/RestartKeepsNodes (80.21s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-905631
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-905631
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-905631: (25.152344712s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-905631 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-905631 --wait=true -v=5 --alsologtostderr: (54.929855187s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-905631
--- PASS: TestMultiNode/serial/RestartKeepsNodes (80.21s)
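
The restart check above amounts to: record the node list, stop the whole profile, start it again with --wait=true, and confirm the same nodes come back. A minimal sketch:

  minikube node list -p multinode-demo > nodes.before
  minikube stop -p multinode-demo
  minikube start -p multinode-demo --wait=true
  minikube node list -p multinode-demo > nodes.after
  diff nodes.before nodes.after   # an empty diff means the node set survived the restart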

TestMultiNode/serial/DeleteNode (5.68s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-905631 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-905631 node delete m03: (4.994906694s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-905631 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.68s)

TestMultiNode/serial/StopMultiNode (24.05s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-905631 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-905631 stop: (23.849149352s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-905631 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-905631 status: exit status 7 (100.772965ms)

                                                
                                                
-- stdout --
	multinode-905631
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-905631-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-905631 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-905631 status --alsologtostderr: exit status 7 (96.208687ms)

                                                
                                                
-- stdout --
	multinode-905631
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-905631-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 19:29:37.393980  164393 out.go:360] Setting OutFile to fd 1 ...
	I1213 19:29:37.394098  164393 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 19:29:37.394109  164393 out.go:374] Setting ErrFile to fd 2...
	I1213 19:29:37.394114  164393 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 19:29:37.394355  164393 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-2686/.minikube/bin
	I1213 19:29:37.394558  164393 out.go:368] Setting JSON to false
	I1213 19:29:37.394589  164393 mustload.go:66] Loading cluster: multinode-905631
	I1213 19:29:37.394640  164393 notify.go:221] Checking for updates...
	I1213 19:29:37.395005  164393 config.go:182] Loaded profile config "multinode-905631": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 19:29:37.395028  164393 status.go:174] checking status of multinode-905631 ...
	I1213 19:29:37.395816  164393 cli_runner.go:164] Run: docker container inspect multinode-905631 --format={{.State.Status}}
	I1213 19:29:37.413929  164393 status.go:371] multinode-905631 host status = "Stopped" (err=<nil>)
	I1213 19:29:37.413951  164393 status.go:384] host is not running, skipping remaining checks
	I1213 19:29:37.413958  164393 status.go:176] multinode-905631 status: &{Name:multinode-905631 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 19:29:37.413981  164393 status.go:174] checking status of multinode-905631-m02 ...
	I1213 19:29:37.414305  164393 cli_runner.go:164] Run: docker container inspect multinode-905631-m02 --format={{.State.Status}}
	I1213 19:29:37.442361  164393 status.go:371] multinode-905631-m02 host status = "Stopped" (err=<nil>)
	I1213 19:29:37.442381  164393 status.go:384] host is not running, skipping remaining checks
	I1213 19:29:37.442388  164393 status.go:176] multinode-905631-m02 status: &{Name:multinode-905631-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.05s)

TestMultiNode/serial/RestartMultiNode (54.9s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-905631 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1213 19:29:44.921517    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 19:29:45.767154    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-905631 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (54.19191503s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-905631 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (54.90s)

TestMultiNode/serial/ValidateNameConflict (35.5s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-905631
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-905631-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-905631-m02 --driver=docker  --container-runtime=crio: exit status 14 (103.992589ms)

                                                
                                                
-- stdout --
	* [multinode-905631-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22122
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22122-2686/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-2686/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-905631-m02' is duplicated with machine name 'multinode-905631-m02' in profile 'multinode-905631'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-905631-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-905631-m03 --driver=docker  --container-runtime=crio: (32.923615279s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-905631
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-905631: exit status 80 (330.373925ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-905631 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-905631-m03 already exists in multinode-905631-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-905631-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-905631-m03: (2.089724073s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (35.50s)

TestPreload (118.56s)

=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-738725 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio
E1213 19:31:42.459747    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-350101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-738725 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio: (1m1.523944631s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-738725 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-arm64 -p test-preload-738725 image pull gcr.io/k8s-minikube/busybox: (2.255141783s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-738725
preload_test.go:55: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-738725: (5.911975788s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-738725 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1213 19:32:48.835361    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-738725 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (46.185586692s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-738725 image list
helpers_test.go:176: Cleaning up "test-preload-738725" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-738725
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-738725: (2.441948803s)
--- PASS: TestPreload (118.56s)
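
The preload check above boils down to: create a cluster with the preloaded-images tarball disabled, pull an extra image, stop, restart with preloading enabled, and confirm the pulled image is still present. A minimal sketch with an illustrative profile name:

  minikube start -p preload-demo --memory=3072 --preload=false --wait=true --driver=docker --container-runtime=crio
  minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
  minikube stop -p preload-demo
  minikube start -p preload-demo --preload=true --wait=true --driver=docker --container-runtime=crio
  minikube -p preload-demo image list   # gcr.io/k8s-minikube/busybox should still be listed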

TestScheduledStopUnix (107.68s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-706598 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-706598 --memory=3072 --driver=docker  --container-runtime=crio: (31.615260737s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-706598 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1213 19:33:42.396043  178547 out.go:360] Setting OutFile to fd 1 ...
	I1213 19:33:42.396233  178547 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 19:33:42.396258  178547 out.go:374] Setting ErrFile to fd 2...
	I1213 19:33:42.396280  178547 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 19:33:42.396551  178547 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-2686/.minikube/bin
	I1213 19:33:42.396826  178547 out.go:368] Setting JSON to false
	I1213 19:33:42.396984  178547 mustload.go:66] Loading cluster: scheduled-stop-706598
	I1213 19:33:42.397405  178547 config.go:182] Loaded profile config "scheduled-stop-706598": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 19:33:42.397504  178547 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/scheduled-stop-706598/config.json ...
	I1213 19:33:42.397734  178547 mustload.go:66] Loading cluster: scheduled-stop-706598
	I1213 19:33:42.397894  178547 config.go:182] Loaded profile config "scheduled-stop-706598": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-706598 -n scheduled-stop-706598
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-706598 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1213 19:33:42.854193  178634 out.go:360] Setting OutFile to fd 1 ...
	I1213 19:33:42.854303  178634 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 19:33:42.854309  178634 out.go:374] Setting ErrFile to fd 2...
	I1213 19:33:42.854314  178634 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 19:33:42.854669  178634 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-2686/.minikube/bin
	I1213 19:33:42.854982  178634 out.go:368] Setting JSON to false
	I1213 19:33:42.856409  178634 daemonize_unix.go:73] killing process 178563 as it is an old scheduled stop
	I1213 19:33:42.856518  178634 mustload.go:66] Loading cluster: scheduled-stop-706598
	I1213 19:33:42.856909  178634 config.go:182] Loaded profile config "scheduled-stop-706598": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 19:33:42.856984  178634 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/scheduled-stop-706598/config.json ...
	I1213 19:33:42.860663  178634 mustload.go:66] Loading cluster: scheduled-stop-706598
	I1213 19:33:42.860833  178634 config.go:182] Loaded profile config "scheduled-stop-706598": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1213 19:33:42.864829    4637 retry.go:31] will retry after 142.277µs: open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/scheduled-stop-706598/pid: no such file or directory
I1213 19:33:42.865433    4637 retry.go:31] will retry after 77.12µs: open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/scheduled-stop-706598/pid: no such file or directory
I1213 19:33:42.865938    4637 retry.go:31] will retry after 330.693µs: open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/scheduled-stop-706598/pid: no such file or directory
I1213 19:33:42.867018    4637 retry.go:31] will retry after 281.222µs: open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/scheduled-stop-706598/pid: no such file or directory
I1213 19:33:42.868123    4637 retry.go:31] will retry after 437.594µs: open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/scheduled-stop-706598/pid: no such file or directory
I1213 19:33:42.869235    4637 retry.go:31] will retry after 770.906µs: open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/scheduled-stop-706598/pid: no such file or directory
I1213 19:33:42.870303    4637 retry.go:31] will retry after 1.543444ms: open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/scheduled-stop-706598/pid: no such file or directory
I1213 19:33:42.872446    4637 retry.go:31] will retry after 949.68µs: open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/scheduled-stop-706598/pid: no such file or directory
I1213 19:33:42.873534    4637 retry.go:31] will retry after 2.082202ms: open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/scheduled-stop-706598/pid: no such file or directory
I1213 19:33:42.876652    4637 retry.go:31] will retry after 2.22724ms: open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/scheduled-stop-706598/pid: no such file or directory
I1213 19:33:42.879847    4637 retry.go:31] will retry after 2.893428ms: open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/scheduled-stop-706598/pid: no such file or directory
I1213 19:33:42.884293    4637 retry.go:31] will retry after 7.388387ms: open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/scheduled-stop-706598/pid: no such file or directory
I1213 19:33:42.892523    4637 retry.go:31] will retry after 16.913263ms: open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/scheduled-stop-706598/pid: no such file or directory
I1213 19:33:42.910183    4637 retry.go:31] will retry after 13.766811ms: open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/scheduled-stop-706598/pid: no such file or directory
I1213 19:33:42.924421    4637 retry.go:31] will retry after 20.583229ms: open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/scheduled-stop-706598/pid: no such file or directory
I1213 19:33:42.945642    4637 retry.go:31] will retry after 32.409407ms: open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/scheduled-stop-706598/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-706598 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-706598 -n scheduled-stop-706598
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-706598
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-706598 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1213 19:34:08.784088  178991 out.go:360] Setting OutFile to fd 1 ...
	I1213 19:34:08.784210  178991 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 19:34:08.784227  178991 out.go:374] Setting ErrFile to fd 2...
	I1213 19:34:08.784233  178991 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 19:34:08.784507  178991 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-2686/.minikube/bin
	I1213 19:34:08.784752  178991 out.go:368] Setting JSON to false
	I1213 19:34:08.784848  178991 mustload.go:66] Loading cluster: scheduled-stop-706598
	I1213 19:34:08.785256  178991 config.go:182] Loaded profile config "scheduled-stop-706598": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 19:34:08.785339  178991 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/scheduled-stop-706598/config.json ...
	I1213 19:34:08.785536  178991 mustload.go:66] Loading cluster: scheduled-stop-706598
	I1213 19:34:08.785663  178991 config.go:182] Loaded profile config "scheduled-stop-706598": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
E1213 19:34:44.921912    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 19:34:45.767283    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-706598
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-706598: exit status 7 (72.26726ms)

                                                
                                                
-- stdout --
	scheduled-stop-706598
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-706598 -n scheduled-stop-706598
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-706598 -n scheduled-stop-706598: exit status 7 (66.77231ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-706598" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-706598
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-706598: (4.473610683s)
--- PASS: TestScheduledStopUnix (107.68s)
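
The scheduled-stop behaviour exercised above (schedule, re-schedule, cancel, then let a schedule fire) can be driven the same way from a shell. A minimal sketch, profile name illustrative:

  minikube stop -p sched-demo --schedule 5m        # queue a stop five minutes out
  minikube stop -p sched-demo --schedule 15s       # re-scheduling replaces the earlier scheduled-stop process
  minikube stop -p sched-demo --cancel-scheduled   # prints "All existing scheduled stops cancelled"
  minikube status --format='{{.TimeToStop}}' -p sched-demo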

TestInsufficientStorage (13.15s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-779438 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-779438 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.573940275s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b147fb62-7cac-4f59-8e09-bdf3fced762c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-779438] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"701e9f4d-7e14-4e61-bc2c-9aef7df12f72","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22122"}}
	{"specversion":"1.0","id":"1668bddc-16e5-4ba7-9cee-7e026fb0af99","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5165cfb5-e3cf-494f-ab70-0333fb5141ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22122-2686/kubeconfig"}}
	{"specversion":"1.0","id":"ab40b03e-0350-420f-a1e4-2f602db64a8f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-2686/.minikube"}}
	{"specversion":"1.0","id":"0830c4f0-15c9-41fd-929f-994ab23a5ad4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"cba9e878-7521-48e8-8be5-4bb81cb7a6bf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"29cb4c2a-d8a5-4e04-840d-79c510fb1bd3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"ae29a836-5806-445a-ae36-7351925a305f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"a7826742-5f2f-434a-b0b6-1aad1a3f2f72","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"4b269015-50d3-420e-9d6c-b3a4ffd52d0d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"822f7c43-aa84-4516-97c2-b1d480ea0e5d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-779438\" primary control-plane node in \"insufficient-storage-779438\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"2aa5de4e-fac2-4628-9675-4193252b4441","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1765275396-22083 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"f8bf2c7f-73ac-42ee-aceb-49725191726b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"c4c5065f-a1dc-4f90-acf0-9716cc7b0ad6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-779438 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-779438 --output=json --layout=cluster: exit status 7 (296.976431ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-779438","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-779438","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 19:35:09.257721  180700 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-779438" does not appear in /home/jenkins/minikube-integration/22122-2686/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-779438 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-779438 --output=json --layout=cluster: exit status 7 (299.183745ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-779438","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-779438","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 19:35:09.560534  180767 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-779438" does not appear in /home/jenkins/minikube-integration/22122-2686/kubeconfig
	E1213 19:35:09.570408  180767 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/insufficient-storage-779438/events.json: no such file or directory

                                                
                                                
** /stderr **
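To pull just the status fields out of the --layout=cluster JSON shown above, a jq one-liner is enough; jq is not part of the test harness and is used here purely for illustration. Note that the status command still exits 7 for an InsufficientStorage profile, so the pipe succeeds even though the command "fails":

	# overall cluster status name (here: "InsufficientStorage")
	out/minikube-linux-arm64 status -p insufficient-storage-779438 --output=json --layout=cluster | jq -r '.StatusName'
	# per-node status names
	out/minikube-linux-arm64 status -p insufficient-storage-779438 --output=json --layout=cluster | jq -r '.Nodes[].StatusName'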
helpers_test.go:176: Cleaning up "insufficient-storage-779438" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-779438
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-779438: (1.982331542s)
--- PASS: TestInsufficientStorage (13.15s)

                                                
                                    
x
+
TestRunningBinaryUpgrade (54.59s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.1626540702 start -p running-upgrade-947759 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.1626540702 start -p running-upgrade-947759 --memory=3072 --vm-driver=docker  --container-runtime=crio: (32.670343935s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-947759 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-947759 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (17.843307162s)
helpers_test.go:176: Cleaning up "running-upgrade-947759" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-947759
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-947759: (2.165748445s)
--- PASS: TestRunningBinaryUpgrade (54.59s)

                                                
                                    
x
+
TestMissingContainerUpgrade (125.72s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.3989502049 start -p missing-upgrade-208144 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.3989502049 start -p missing-upgrade-208144 --memory=3072 --driver=docker  --container-runtime=crio: (1m5.816748806s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-208144
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-208144
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-208144 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-208144 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (54.0217173s)
helpers_test.go:176: Cleaning up "missing-upgrade-208144" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-208144
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-208144: (3.174335601s)
--- PASS: TestMissingContainerUpgrade (125.72s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-255151 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-255151 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (95.961068ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-255151] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22122
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22122-2686/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-2686/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
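The MK_USAGE error is self-describing: --no-kubernetes conflicts with an explicit --kubernetes-version, whether passed on the command line or persisted in the global config. A minimal sketch of the fix the message suggests, mirroring the flags this test suite otherwise uses:

	# drop any globally persisted kubernetes-version
	out/minikube-linux-arm64 config unset kubernetes-version
	# then start without specifying a version
	out/minikube-linux-arm64 start -p NoKubernetes-255151 --no-kubernetes --driver=docker --container-runtime=crio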
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (48.4s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-255151 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-255151 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (47.636155473s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-255151 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (48.40s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (8.14s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-255151 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-255151 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (5.368681141s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-255151 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-255151 status -o json: exit status 2 (425.392744ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-255151","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-255151
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-255151: (2.350393467s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.14s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (9.71s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-255151 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-255151 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (9.708752132s)
--- PASS: TestNoKubernetes/serial/Start (9.71s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22122-2686/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-255151 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-255151 "sudo systemctl is-active --quiet service kubelet": exit status 1 (383.225646ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
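The non-zero exit here is the expected (passing) outcome: systemctl is-active --quiet exits 0 only when the unit is active, and kubelet is deliberately not running in a --no-kubernetes profile (status 3 is systemd's usual "not active" code, surfaced through ssh as exit 1). A sketch of the same check against any profile:

	# exits 0 only if kubelet is active; any non-zero status means "not running"
	out/minikube-linux-arm64 ssh -p NoKubernetes-255151 "sudo systemctl is-active --quiet service kubelet"
	echo "exit: $?"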
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.38s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (3.05s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
no_kubernetes_test.go:204: (dbg) Done: out/minikube-linux-arm64 profile list --output=json: (2.320553439s)
--- PASS: TestNoKubernetes/serial/ProfileList (3.05s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-255151
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-255151: (1.294006967s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (7.13s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-255151 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-255151 --driver=docker  --container-runtime=crio: (7.128489337s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.13s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-255151 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-255151 "sudo systemctl is-active --quiet service kubelet": exit status 1 (304.563965ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.30s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.04s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.04s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (301.15s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.2927766429 start -p stopped-upgrade-825838 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.2927766429 start -p stopped-upgrade-825838 --memory=3072 --vm-driver=docker  --container-runtime=crio: (34.780183827s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.2927766429 -p stopped-upgrade-825838 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.2927766429 -p stopped-upgrade-825838 stop: (1.224980276s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-825838 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1213 19:38:05.536722    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-350101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 19:39:44.920957    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 19:39:45.767337    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 19:41:42.464157    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-350101/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-825838 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m25.143660323s)
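Stripped of the interleaved cert_rotation noise, the upgrade flow exercised above is a three-step sequence: provision with the previous release binary, stop the cluster, then restart it with the binary under test. A condensed sketch (the /tmp binary path is the test-generated copy of the v1.35.0 release, shown verbatim):

	# 1. create the cluster with the previous release binary
	/tmp/minikube-v1.35.0.2927766429 start -p stopped-upgrade-825838 --memory=3072 --vm-driver=docker --container-runtime=crio
	# 2. stop it with the same old binary
	/tmp/minikube-v1.35.0.2927766429 -p stopped-upgrade-825838 stop
	# 3. restart the stopped cluster with the binary under test, which performs the upgrade
	out/minikube-linux-arm64 start -p stopped-upgrade-825838 --memory=3072 --driver=docker --container-runtime=crio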
--- PASS: TestStoppedBinaryUpgrade/Upgrade (301.15s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.76s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-825838
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-825838: (1.764771252s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.76s)

                                                
                                    
x
+
TestPause/serial/Start (84.11s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-327125 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-327125 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m24.107818783s)
--- PASS: TestPause/serial/Start (84.11s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (30.26s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-327125 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1213 19:44:44.921104    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/addons-377325/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 19:44:45.767329    4637 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-2686/.minikube/profiles/functional-752103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-327125 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (30.241617877s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (30.26s)

                                                
                                    

Test skip (36/316)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0.49
31 TestOffline 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
112 TestFunctional/parallel/MySQL 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
130 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
132 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0
261 TestGvisorAddon 0
283 TestImageBuild 0
284 TestISOImage 0
348 TestChangeNoneUser 0
351 TestScheduledStopWindows 0
353 TestSkaffold 0
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.49s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-351651 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:176: Cleaning up "download-docker-351651" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-351651
--- SKIP: TestDownloadOnlyKic (0.49s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    